Writing flows using accounts in Corda - blockchain

I am using accounts in Corda. New accounts are created successfully in my code, but I am facing difficulties with these two things.
1.) How to check that an account was actually created and is present in the node, i.e. how can we list all the accounts on a Corda node?
2.) How to write the responder flow for accounts. My transaction flow is not working properly. Is there anything to change in the classic responder flow code once we start using the accounts library?
My code is as follows:
@InitiatedBy(Proposal.class)
public static class ProposalAcceptance extends FlowLogic<Void> {
    // private variable
    private FlowSession counterpartySession;

    // Constructor
    public ProposalAcceptance(FlowSession counterpartySession) {
        this.counterpartySession = counterpartySession;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        SignedTransaction signedTransaction = subFlow(new SignTransactionFlow(counterpartySession) {
            @Suspendable
            @Override
            protected void checkTransaction(SignedTransaction stx) throws FlowException {
                /*
                 * SignTransactionFlow will automatically verify the transaction and its signatures before signing it.
                 * However, just because a transaction is contractually valid doesn't mean we necessarily want to sign.
                 * What if we don't want to deal with the counterparty in question, or the value is too high,
                 * or we're not happy with the transaction's structure? checkTransaction
                 * allows us to define these additional checks. If any of these conditions are not met,
                 * we will not sign the transaction - even if the transaction and its signatures are contractually valid.
                 * ----------
                 * For this hello-world CorDapp, we will not implement any additional checks.
                 */
            }
        });
        // Store the transaction in the database.
        subFlow(new ReceiveFinalityFlow(counterpartySession));
        return null;
    }
}

Accounts are internally just Corda states of type AccountInfo, so you can query the vault to list all the accounts the node knows about, for example from the node shell:
run vaultQuery contractStateType: com.r3.corda.lib.accounts.contracts.states.AccountInfo
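Inside a flow (or any code with access to the ServiceHub), the rough Java equivalent of that shell query would be something like this sketch (imports trimmed to the essentials; adjust to your CorDapp):

import com.r3.corda.lib.accounts.contracts.states.AccountInfo;
import net.corda.core.contracts.StateAndRef;
import java.util.List;

// Query the vault for every AccountInfo state this node knows about
// (accounts hosted locally as well as accounts shared from other nodes).
List<StateAndRef<AccountInfo>> accounts =
        getServiceHub().getVaultService().queryBy(AccountInfo.class).getStates();

for (StateAndRef<AccountInfo> accountRef : accounts) {
    AccountInfo account = accountRef.getState().getData();
    System.out.println(account.getName() + " hosted by " + account.getHost());
}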
There isn't anything specific that changes in the responder flow; please make sure you are using the correct sessions in the initiator flow. Take a look at a few of the samples available in the samples repository here: https://github.com/corda/samples-java/tree/master/Accounts

To check that an account was created, you need to write flow tests; a great way to learn is to look at how R3 engineers write their tests.
For instance, you can find here the test scenarios for the CreateAccount flow.
To get an account, the library has a very useful service, KeyManagementBackedAccountService, with different methods to fetch an account (by name, UUID, or PublicKey); have a look here.
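For example, in a Java flow you could resolve the account service and look accounts up roughly like this (a sketch; the account name "sales-account" is just an example):

import com.r3.corda.lib.accounts.contracts.states.AccountInfo;
import com.r3.corda.lib.accounts.workflows.services.AccountService;
import com.r3.corda.lib.accounts.workflows.services.KeyManagementBackedAccountService;
import net.corda.core.contracts.StateAndRef;
import java.util.List;

// The account service is a CordaService provided by the accounts workflows module.
AccountService accountService =
        getServiceHub().cordaService(KeyManagementBackedAccountService.class);

// Look an account up by name (names are only unique per host, so this returns a list) ...
List<StateAndRef<AccountInfo>> byName = accountService.accountInfo("sales-account");

// ... or list every account hosted by this node.
List<StateAndRef<AccountInfo>> ourAccounts = accountService.ourAccounts();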
Now regarding requesting the signature of an account, one thing that is important to understand is that it's not the account that signs the transaction, it's the node that hosts the account that signs on behalf of the account.
So let's say you have 3 nodes (A, B, and C); A initiates a flow and requests the signature of 10 accounts (5 are hosted on B, and 5 are hosted on C).
After A signs the initial transaction it will create FlowSessions to collect signatures.
Since it's the host nodes that sign on behalf of accounts, in our example you only need 2 FlowSessions: one with node B (so it signs on behalf of the 5 accounts it hosts) and one with node C (for the other 5 accounts).
In the responder flow, nodes B and C will receive the transaction that was signed by the initiator.
Out of the box, when a node receives a transaction it looks at all of the required signatures, and for each required signature for which it owns the private key, it provides that signature.
Meaning, when node B receives the transaction, it will see 10 required signatures, and because it hosts 5 of the accounts (i.e. it owns the private keys for those 5 accounts), it will automatically provide 5 signatures.
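Sketched in Java on the initiator side, that could look roughly like the following (illustrative only; accountInfos and partiallySignedTx are assumed to already exist in your flow):

import com.r3.corda.lib.accounts.contracts.states.AccountInfo;
import net.corda.core.flows.CollectSignaturesFlow;
import net.corda.core.flows.FinalityFlow;
import net.corda.core.flows.FlowSession;
import net.corda.core.identity.Party;
import net.corda.core.transactions.SignedTransaction;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// One session per host node, not per account: in the example above, nodes B and C
// each get a single session even though they host five accounts each.
Set<Party> hosts = accountInfos.stream()
        .map(AccountInfo::getHost)
        .filter(host -> !host.equals(getOurIdentity())) // no session to ourselves
        .collect(Collectors.toSet());

List<FlowSession> sessions = hosts.stream()
        .map(this::initiateFlow)
        .collect(Collectors.toList());

// Each host provides signatures on behalf of all the accounts it hosts.
SignedTransaction fullySigned = subFlow(new CollectSignaturesFlow(partiallySignedTx, sessions));
subFlow(new FinalityFlow(fullySigned, sessions));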

Related

Make a Solana program pay for the transaction

I'm not sure whether this is possible, but what I would like to do is make the program pay the transaction cost, so the user can use the program for free.
In my mind the process would be:
I send some SOL to the program's account to cover future transactions.
A user interacts with the SaveData function.
(In the future, this function will include some mechanism to check whether the user may call it without paying.)
The program pays for the transaction, so the user doesn't have to pay even a single lamport.
My code is:
#[derive(Accounts)]
pub struct SaveData<'info> {
    #[account(init, payer = system_program, space = 8 + 50 + 32)]
    pub data_account: Account<'info, DataState>,
    #[account(mut)]
    pub authority: Signer<'info>,
    pub system_program: Program<'info, System>,
}

#[account]
pub struct DataState {
    pub authority: Pubkey,
    content: String,
}
I tried setting system_program as the payer, but when I try to build the program with Anchor it gives me this error:
error[E0599]: no method named `exit` found for struct `SaveData` in the current scope
--> programs/test-smart-contract/src/lib.rs:5:1
|
5 | #[program]
| ^^^^^^^^^^ method not found in `SaveData<'_>`
...
61 | pub struct SaveData<'info> {
| -------------------------- method `exit` not found for this
How can I achieve what I want to do?
Update
In order to manage this problem, I started developing this service: cowsigner.com
Unfortunately, every transaction has a designated fee payer, which must be a signer account and must sign the transaction.
The System Program is a native program on Solana, and you're misusing it in your case. This is its official definition:
Create new accounts, allocate account data, assign accounts to owning programs, transfer lamports from System Program owned accounts and pay transaction fees.
If you wish to pay the fees on behalf of users, you need a dedicated account that is specified as the feePayer on each transaction and that signs each transaction as well.

DDD - Concurrency with quantity

Hi everyone,
I'm a little bit lost with a problem when thinking about it the DDD way.
Imagine you have an application that sells concert tickets. You have an entity called Concert with a ticket quantity and a method to buy a ticket.
class Concert {
  constructor(
    public id: string,
    public name: string,
    public ticketQuantity: number,
  ) {}

  buyTicket() {
    this.ticketQuantity = this.ticketQuantity - 1;
  }
}
The command looks like this:
async execute(command: BookConcertCommand): Promise<void> {
  const concert = await this.concertRepository.findById(command.concertId);
  concert.buyTicket();
  await this.concertRepository.save(concert);
}
Imagine your application has to handle a lot of users, and 1000 users try to buy a ticket at the same time when ticketQuantity is 500.
How can you ensure the invariant that the quantity can't drop below 0?
How do you deal with concurrency here? Even if just two users try to buy a ticket at the same time, the data can end up wrong.
What patterns can we use to ensure consistency under concurrency?
Optimistic or pessimistic concurrency can't be the solution because it would frustrate a lot of users, and we try to put all our domain logic into the domain, so we can't put any logic inside SQL/the DB or use a transactional-script approach.
How can you ensure the invariant that the quantity can't drop below 0?
You include logic in your domain model that only assigns a ticket if at least one unassigned ticket is available.
You include locking (either optimistic or pessimistic) to ensure "first writer wins" -- the loser(s) in a data race should abort or retry.
If your book of record was just data in memory, then you would ensure that all attempts to buy tickets for concert 12345 must first acquire the same lock. In effect, you serialize the requests so that the business logic is running one at a time only.
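For instance, a minimal in-memory sketch of that in Java (class and field names invented for illustration):

import java.util.concurrent.locks.ReentrantLock;

class InMemoryConcert {
    private final ReentrantLock lock = new ReentrantLock(); // one lock per concert
    private int ticketQuantity;

    InMemoryConcert(int ticketQuantity) {
        this.ticketQuantity = ticketQuantity;
    }

    boolean buyTicket() {
        lock.lock(); // serialize all buyers of this concert
        try {
            if (ticketQuantity <= 0) return false; // invariant: never below zero
            ticketQuantity--;
            return true;
        } finally {
            lock.unlock();
        }
    }
}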
If your book of record was a relational database, then within the context of each transaction you might perform a "select for update" to get a copy of the current data, and perform the update in the same transaction. The database will raise its flavor of concurrent-modification exception for the connections that lose the race.
Alternatively, you use something like conditional-write / compare-and-swap semantics: you get an unlocked copy of the concert from the book of record, make your changes, then send an "update this record if it still looks like the unlocked copy" message. If you get the response announcing you've won the race, congratulations - you're done. If not, you retry or fail.
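As a rough JDBC illustration of that conditional-write idea (a sketch only; table and column names are invented, and retries/error handling are omitted):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Conditional write: only decrement the quantity if the row still looks like the copy we read.
void buyTicket(Connection conn, String concertId) throws SQLException {
    int observed;
    try (PreparedStatement select = conn.prepareStatement(
            "SELECT ticket_quantity FROM concerts WHERE id = ?")) {
        select.setString(1, concertId);
        try (ResultSet rs = select.executeQuery()) {
            if (!rs.next()) throw new IllegalStateException("unknown concert");
            observed = rs.getInt(1);
        }
    }
    if (observed <= 0) throw new IllegalStateException("sold out"); // the invariant check

    try (PreparedStatement cas = conn.prepareStatement(
            "UPDATE concerts SET ticket_quantity = ? WHERE id = ? AND ticket_quantity = ?")) {
        cas.setInt(1, observed - 1);
        cas.setString(2, concertId);
        cas.setInt(3, observed);
        if (cas.executeUpdate() == 0) {
            // Somebody else changed the row first: we lost the race, so retry or fail.
        }
    }
}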
Optimistic or pessimistic concurrency can't be the solution because it would frustrate a lot of users
Of course it can
If the concert is overbooked, they are going to be frustrated anyway
The business logic doesn't have to run synchronously with the request - it might be acceptable to write down that they want a ticket, and then contact them asynchronously to let them know that a ticket has been assigned to them
It may be helpful to review some of Udi Dahan's writing on collaborative and competitive domains; for instance, this piece from 2011.
In a collaborative domain, an inherent property of the domain is that multiple actors operate in parallel on the same set of data. A reservation system for concerts would be a good example of a collaborative domain – everyone wants the “good seats” (although it might be better call that competitive rather than collaborative, it is effectively the same principle).
You might be following these steps:
1- ReserveRequested -> ReserveRequestAccepted -> TicketReserved
2- ReserveRequested -> ReserveRequestRejected
When somebody clicks the buy-ticket button, you create a reserve request entity, and then you process the reservation in the background via a queue system.
On the user side, you return a unique reserve-request id that can be used to check the result of the process, so the frontend should poll for the result periodically until it succeeds or fails.
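A minimal Java sketch of that shape (all names invented; in practice the queue would be Kafka/RabbitMQ/etc. and the statuses would live in your database):

import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.LinkedBlockingQueue;

enum ReservationStatus { PENDING, RESERVED, REJECTED }

record ReserveRequest(UUID id, String concertId, String userId) {}

class ReservationService {
    private final BlockingQueue<ReserveRequest> queue = new LinkedBlockingQueue<>();
    private final ConcurrentMap<UUID, ReservationStatus> statuses = new ConcurrentHashMap<>();
    private int ticketsRemaining; // only touched by the single worker thread

    ReservationService(int ticketsRemaining) {
        this.ticketsRemaining = ticketsRemaining;
        startWorker();
    }

    // Called by the "buy ticket" endpoint: returns immediately with an id the client can poll.
    UUID requestReservation(String concertId, String userId) {
        ReserveRequest request = new ReserveRequest(UUID.randomUUID(), concertId, userId);
        statuses.put(request.id(), ReservationStatus.PENDING);
        queue.add(request);
        return request.id();
    }

    // Called by the "check my reservation" endpoint.
    ReservationStatus statusOf(UUID requestId) {
        return statuses.get(requestId);
    }

    // Single background worker: requests are processed one at a time, so the
    // "quantity never drops below zero" invariant is checked without data races.
    private void startWorker() {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    ReserveRequest request = queue.take();
                    boolean reserved = ticketsRemaining > 0;
                    if (reserved) ticketsRemaining--;
                    statuses.put(request.id(), reserved ? ReservationStatus.RESERVED
                                                        : ReservationStatus.REJECTED);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }
}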

Distribute work stored in table to multiple processes

I have a database table where each row represents a piece of work to be done. This table is filled up with / receives work through a REST API. Apart from the REST service taking in the work, I have another service which uses actors to process this work.
I need suggestions for distributing this work evenly across these workers. The work is not one-time; it is repeated at an interval until the user deletes it.
Therefore I need a mechanism where:
The work is distributed evenly as it comes in.
If the second service (the work consumer) fails, it can boot up again with all the records in the table and redistribute the work.
Each actor represents one row of the work table.
class WorkActor(workId: String)(implicit system: ActorSystem, materializer: ActorMaterializer) extends Actor {
  // read the record from the table or wherever you want to read it
  override def preStart(): Unit = {
    logger.info("WorkActor start ===> " + self)
  }

  override def receive: Receive = {
    case _ => {}
  }
}
Create an Akka Cluster Sharding region to dispatch requests from the REST API to the corresponding actor. Call the startShardingRegion function to get an ActorRef; you can then send messages to this sharding ActorRef from the REST API, and the corresponding actor will handle them.
final case class CommandEnvelope(id: String, payload: Any)

def startShardingRegion(role: String)(implicit system: ActorSystem) = {
  ClusterSharding(system).start(
    typeName = role,
    entityProps = Props(classOf[WorkActor]),
    settings = ClusterShardingSettings(system),
    extractEntityId = ClusterConfig.extractEntityId,
    extractShardId = ClusterConfig.extractShardId
  )
}

// sharding key
object ClusterConfig {
  private val numberOfShards = 100

  val extractEntityId: ShardRegion.ExtractEntityId = {
    case CommandEnvelope(id, payload) => (id, payload)
  }

  val extractShardId: ShardRegion.ExtractShardId = {
    case CommandEnvelope(id, _) => (id.hashCode % numberOfShards).toString
    case ShardRegion.StartEntity(id) => (id.hashCode % numberOfShards).toString
  }
}
Read or recover the data in the actor's preStart function. There are many choices: you may read the uncompleted work from a message queue (Kafka), Akka Persistence (RDS, Cassandra), etc.
For split-brain resolution (SBR) there are open-source solutions; that is a more advanced topic to tackle once your business logic works, for example:
https://github.com/TanUkkii007/akka-cluster-custom-downing
The general outline of a solution is to use Akka Cluster, Cluster Sharding, and Akka Cluster Singleton. When the cluster is considered formed (generally when some minimum number of members have joined the cluster), you start the Cluster Sharding system (sharding work items by the DB's primary key) and then a Cluster Singleton will read the DB table and send work items to Cluster Sharding for distribution among the nodes of the cluster. Akka Streams and particularly Alpakka's Slick JDBC integration may prove useful within the singleton. Another cluster singleton to periodically check on jobs may also be useful to recover from cluster node failures (but see below for something to consider there).
Two notes:
If using Cluster Sharding and Cluster Singleton, you probably want to consider what happens in a split-brain situation: this is a distributed system and the probability of a split-brain eventually happening can be presumed to be 100%. In the split-brain scenario, you will very likely have the same jobs being performed simultaneously by different sides of the split, so you need to ask if that's acceptable in your use-case.
If not, then you will need a component which monitors the communications between nodes in your cluster to detect a split-brain and takes steps to resolve the condition: Lightbend's Split Brain Resolver is a good choice if you aren't interested in implementing this yourself.
In a related vein, if the jobs consist of many steps which must be performed, a question to ask is, if a cluster or node fails after completing, say, eight of ten steps, is it acceptable to redo steps 1-8 vs. starting with step 9? If the answer to this is "no", then you'll need to persist the intermediate state of the job. Akka Persistence is a great choice here, though you may want to read up on event sourcing. If using Persistence with Cluster Sharding and Cluster Singleton, it should be noted, you will almost certainly need to handle split-brains (see previous item).

How to store a hash of some data in a currently publicly available blockchain?

I have read that blockchains aren't good at storing large files. But you can create a hash of a file and somehow store that in the blockchain, they say. How do I do that? What API or service should I be looking at? Can I do it on bitcoin.org, or does it need to be some special "parameterized" blockchain service of some sort?
In the blogs I've seen touting how cool it is that you can store your data in MySQL but store the hash in the blockchain, they don't show how you actually, nitty-gritty, get the hash into the blockchain, or even which services are recommended for doing this. Where is the API? What field do I pass the hash into?
This is what I think the frontend would look like:
hashedData = web3.utils.sha3(JSON.stringify(certificate));
contracts.mycontract.deployed().then(function(result) {
  return result.createCertificate(public_addresskey, hashedData, { from: account }); // get logged-in public key from MetaMask
}).then(function(result) {
  // send post request to backend to create db entry
}).catch(function(err) {
  console.error(err);
  // show error to the user.
});
The contract might look something like the sketch below.
There is a struct to contain everything but the id, a mapping from hash => Cert for random access (using the hash as the id), and a certList to enumerate the certificates in case the hash isn't known. That should not happen, because the contract emits events for each important state change. You would probably want to protect the createCert() function with onlyOwner, a whitelist, or role-based access control.
To "confirm" a cert, the recipient signs a transaction. A true value in the confirmed field shows that the user in question did indeed sign, because there is no other way for this to occur.
pragma solidity 0.5.1;

contract Certificates {

    struct Cert {
        address recipient;
        bool confirmed;
    }

    mapping(bytes32 => Cert) public certs;
    bytes32[] public certList;

    event LogNewCert(address sender, bytes32 cert, address recipient);
    event LogConfirmed(address sender, bytes32 cert);

    function isCert(bytes32 cert) public view returns(bool isIndeed) {
        if(cert == 0) return false;
        return certs[cert].recipient != address(0);
    }

    function createCert(bytes32 cert, address recipient) public {
        require(recipient != address(0));
        require(!isCert(cert));
        Cert storage c = certs[cert];
        c.recipient = recipient;
        certList.push(cert);
        emit LogNewCert(msg.sender, cert, recipient);
    }

    function confirmCert(bytes32 cert) public {
        require(certs[cert].recipient == msg.sender);
        require(certs[cert].confirmed == false);
        certs[cert].confirmed = true;
        emit LogConfirmed(msg.sender, cert);
    }

    function isUserCert(bytes32 cert, address user) public view returns(bool) {
        if(!isCert(cert)) return false;
        if(certs[cert].recipient != user) return false;
        return certs[cert].confirmed;
    }
}
The short answer is: "Use an Ethereum smart contract with events and don't store hash values in the contract."
On Etherscan you may find a complete contract for the storage of hash values in Ethereum: HashValueManager. As you will see in this code example, things are a bit more complex: you have to manage access rights, and here cost was not important (this contract was designed for a syndicated blockchain).
It is actually very expensive to store hash data in a blockchain. When you use an Ethereum contract to store the hash data, you have to pay about 0.03 Euro (depending on the ETH and gas price) for each transaction.
A better option is not to store the hash value in the contract at all. A much cheaper alternative is to emit indexed events in a smart contract and query the events (this is very fast). Information about the events is stored in a bloom filter of the mined block and will be readable forever. The only disadvantage is that an event can't be read from within the contract (but that is not part of your business logic here).
You may read more details about events and how to fetch them in an application here: "ETHEREUM EVENT EXPLORER FOR SMART CONTRACTS" (disclaimer: this article is my blog). The complete source code is also available on GitHub.
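If your stack is on the JVM, a hedged sketch with web3j (the Java Ethereum client) of pulling every log your contract has emitted could look like this; the node URL and contract address are placeholders:

import java.io.IOException;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.core.methods.request.EthFilter;
import org.web3j.protocol.core.methods.response.EthLog;
import org.web3j.protocol.http.HttpService;

void printContractLogs() throws IOException {
    // Connect to any Ethereum node (Infura, a local geth, ...).
    Web3j web3 = Web3j.build(new HttpService("https://mainnet.infura.io/v3/YOUR_PROJECT_ID"));

    // Ask for all logs emitted by the contract; the hash you are interested in
    // travels in the log topics/data, not in contract storage.
    EthFilter filter = new EthFilter(
            DefaultBlockParameterName.EARLIEST,
            DefaultBlockParameterName.LATEST,
            "0xYourContractAddress");

    EthLog logs = web3.ethGetLogs(filter).send();
    logs.getLogs().forEach(logResult -> System.out.println(logResult.get()));
}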
There are several existing ways to store it. A more interesting question: how are you planning to retrieve and use your data? I assume you are planning to retrieve it somehow; otherwise, if you never retrieve it, there is no point in storing it.
OK, there are two ways to store your hashes. I will demonstrate examples using the Emercoin blockchain.
Store it as an OP_RETURN UTXO. To do this, use the "createrawtransaction" command via the JSON-RPC API or the command-line interface; just specify "data" to define a UTXO containing your hash.
In this case, the data is stored in an unspendable transaction output. This saved data is not indexed, and you need to save the TXID for future retrieval; otherwise, you would need to rescan the entire blockchain to retrieve your data.
You can store data in the indexed NVS subsystem (Name-Value Storage). To upload, run the "name_new" command, where the hash can be your search key. Later, you can quickly retrieve your data by search key with the "name_show" or "name_history" commands. It is also possible to save your data as DNS records within emerDNS and retrieve them via DNS UDP requests using the RFC 1035 protocol.
For details, see the doc: https://emercoin.com/en/documentation/emercoin-api

Aggregate root invariant enforcement with application quotas

The application I'm working on needs to enforce the following rules (among others):
We cannot register a new user to the system if the active user quota for the tenant is exceeded.
We cannot make a new project if the project quota for the tenant is exceeded.
We cannot add more multimedia resources to any project that belongs to a tenant if the maximum storage quota defined in the tenant is exceeded
The main entities involved in this domain are:
Tenant
Project
User
Resource
As you can imagine, these are the relationships between the entities:
Tenant -> Projects
Tenant -> Users
Project -> Resources
At first glance, it seems the aggregate root that will enforce those rules is the tenant:
class Tenant
  attr_accessor :users
  attr_accessor :projects

  def register_user(name, email, ...)
    raise QuotaExceededError if active_users.count >= @users_quota

    User.new(name, email, ...).tap do |user|
      active_users << user
    end
  end

  def activate_user(user_id)
    raise QuotaExceededError if active_users.count >= @users_quota

    user = users.find { |u| u.id == user_id }
    user.activate
  end

  def make_project(name, ...)
    raise QuotaExceededError if projects.count >= @projects_quota

    Project.new(name, ...).tap do |project|
      projects << project
    end
  end

  ...

  private

  def active_users
    users.select(&:active?)
  end
end
So, in the application service, we would use this as:
class ApplicationService
  def register_user(tenant_id, *user_attrs)
    transaction do
      tenant = tenants_repository.find(tenant_id, lock: true)
      tenant.register_user(*user_attrs)
      tenants_repository.save!(tenant)
    end
  end

  ...
end
The problem with this approach is that the aggregate root is quite huge, because it needs to load all users, projects, and resources, and this is not practical. Also, with regard to concurrency, we would pay a lot of penalties because of it.
An alternative would be (I'll focus on user registration):
class Tenant
  attr_accessor :total_active_users

  def register_user(name, email, ...)
    raise QuotaExceededError if total_active_users >= @users_quota

    # total_active_users += 1 maybe makes sense although this field won't be persisted
    User.new(name, email, ...)
  end
end
class ApplicationService
  def register_user(tenant_id, *user_attrs)
    transaction do
      tenant = tenants_repository.find(tenant_id, lock: true)
      user = tenant.register_user(*user_attrs)
      users_repository.save!(user)
    end
  end

  ...
end
The case above uses a factory method in Tenant that enforces the business rules and returns the User aggregate. The main advantage compared to the previous implementation is that we don't need to load all users (and projects and resources) in the aggregate root, only their counts. Still, for any new resource, user, or project we want to add/register/make, we potentially have concurrency penalties due to the lock acquired. For example, if I'm registering a new user, we cannot make a new project at the same time.
Note also that we acquire a lock on Tenant even though we are not changing any state in it, so we don't call tenants_repository.save. This lock is used as a mutex, and we cannot take advantage of optimistic concurrency unless we decide to save the tenant (detecting a change in the total_active_users count) so that we can bump the tenant version and raise an error for other concurrent changes if the version has changed, as usual.
Ideally, I'd like to get rid of those methods in the Tenant class (because they also prevent us from splitting some pieces of the application into their own bounded contexts) and enforce the invariant rules in some other way that does not have a big impact on the concurrency of other entities (projects and resources), but I don't really know how to prevent two users from being registered simultaneously without using that Tenant as the aggregate root.
I'm pretty sure that this is a common scenario that must have a better way to be implemented than my previous examples.
I'm pretty sure that this is a common scenario that must have a better way to be implemented than my previous examples.
A common search term for this sort of problem: Set Validation.
If there is some invariant that must always be satisfied for an entire set, then that entire set is going to have to be part of the "same" aggregate.
Often, the invariant itself is the bit that you want to push on; does the business need this constraint strictly enforced, or is it more appropriate to loosely enforce the constraint and charge a fee when the customer exceeds its contracted limits?
With multiple sets -- each set needs to be part of an aggregate, but they don't necessarily need to be part of the same aggregate. If there is no invariant that spans multiple sets, then you can have a separate aggregate for each. Two such aggregates may be correlated, sharing the same tenant id.
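To make that concrete, a rough Java sketch (names invented) with one small aggregate per set, both keyed by the same tenant id, so that registering a user and creating a project never contend for the same lock:

// Guards only the "active users of a tenant" set.
class TenantUserRegistrations {
    private final String tenantId;
    private final int usersQuota;
    private int totalActiveUsers;

    TenantUserRegistrations(String tenantId, int usersQuota, int totalActiveUsers) {
        this.tenantId = tenantId;
        this.usersQuota = usersQuota;
        this.totalActiveUsers = totalActiveUsers;
    }

    void registerUser() {
        if (totalActiveUsers >= usersQuota)
            throw new IllegalStateException("user quota exceeded for tenant " + tenantId);
        totalActiveUsers++;
    }
}

// Guards only the "projects of a tenant" set; a separate aggregate with its own lock/version.
class TenantProjects {
    private final String tenantId;
    private final int projectsQuota;
    private int totalProjects;

    TenantProjects(String tenantId, int projectsQuota, int totalProjects) {
        this.tenantId = tenantId;
        this.projectsQuota = projectsQuota;
        this.totalProjects = totalProjects;
    }

    void makeProject() {
        if (totalProjects >= projectsQuota)
            throw new IllegalStateException("project quota exceeded for tenant " + tenantId);
        totalProjects++;
    }
}

Each aggregate is persisted and locked (optimistically or pessimistically) on its own row, so the only operations that serialize against each other are the ones competing for the same set.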
It may help to review Mauro Servienti's talk All our aggregates are wrong.
An aggregate should be just an element that checks rules. It can be anything from a stateless static function to a full-state complex object; it does not need to match your persistence schema, your "real life" concepts, how you modeled your entities, or how you structure your data or your views. You model the aggregate with just the data you need to check the rules, in the form that suits you best.
Do not be afraid of precomputing values and persisting them (total_active_users in this case).
My recommendation is to keep things as simple as possible and refactor (which could mean splitting, moving, and/or merging things) later; once you have all the behavior modelled, it is easier to rethink, analyze, and refactor.
This would be my first approach without event sourcing:
TenantData { // just the data the aggregate needs from persistence
    int Id;
    int total_active_users;
    int quota;
}

UserEntity { // the User entity
    int id;
    string name;
    date birthDate;
    // other data and/or behaviour
}

public class RegistrationAggregate {

    private TenantData fromTenant; // data from persistence

    public RegistrationAggregate(TenantData fromTenant) { // ctor
        this.fromTenant = fromTenant;
    }

    public UserRegisteredEvent registerUser(UserEntity user) {
        if (fromTenant.total_active_users >= fromTenant.quota) throw new QuotaExceededException();
        fromTenant.total_active_users++; // increase active users
        return new UserRegisteredEvent(fromTenant, user); // return the system changes expressed as an event
    }
}

RegisterUserCommand { // command structure
    int tenantId;
    UserData userData; // id, name, surname, birthDate, etc.
}

class ApplicationService {
    public void registerUser(RegisterUserCommand registerUserCommand) {
        var user = new UserEntity(registerUserCommand.userData); // avoid wrong entity state; ctor fails if some data is incorrect
        RegistrationAggregate agg = aggregatesRepository.Handle(registerUserCommand); // Handle is overloaded for every command we need; use registerUserCommand.tenantId to bring total_active_users and quota from persistence, and create the RegistrationAggregate fed with TenantData
        var userRegisteredEvent = agg.registerUser(user);
        persistence.Handle(userRegisteredEvent); // Handle is overloaded for every event we need; open transaction, persist userRegisteredEvent.fromTenant.total_active_users where tenantId, optimistic concurrency could fail if total_active_users has changed since we read it (rollback transaction), persist userRegisteredEvent.user in relationship with tenantId, commit transaction
        eventBus.publish(userRegisteredEvent); // notify external sources for eventual consistency
    }
}
Read this and this for an expanded explanation.