I have created a simple blockchain application using NodeJS. The blockchain data file is stored on the local file system. There is no mining of blocks and no difficulty level involved in this blockchain.
Please suggest whether I can host this application on a private Ethereum / Hyperledger network, and what changes I would need to make. Below is the code I'm using to create blocks.
Sample Genesis Block
[{"index":0,"previousHash":"0","timestamp":1465154705,"transaction":{"id":"0","transactionHash":"0","type":"","data":{"StudInfo":[{"id":"","studentId":"","parenterId":"","schemeId":"","batchId":"","instructorId":"","trainingId":"","skillId":""}]},"fromAddress":""},"hash":"816534932c2b7154836da6afc367695e6337db8a921823784c14378abed4f7d7"}]
Sample Code (NodeJS)
var generateNextBlock = (blockData) => {
    var previousBlock = getLatestBlock();
    var nextIndex = previousBlock.index + 1;
    var nextTimestamp = new Date().getTime() / 1000;
    var nextHash = calculateHash(nextIndex, previousBlock.hash, nextTimestamp, blockData);
    return new Block(nextIndex, previousBlock.hash, nextTimestamp, blockData, nextHash);
};
var calculateHashForBlock = (block) => {
    return calculateHash(block.index, block.previousHash, block.timestamp, block.transaction);
};
var calculateHash = (index, previousHash, timestamp, transaction) => {
    // stringify the transaction object; concatenating it directly would hash "[object Object]"
    return CryptoJS.SHA256(index + previousHash + timestamp + JSON.stringify(transaction)).toString();
};
var addBlock = (newBlock) => {
    if (isValidNewBlock(newBlock, getLatestBlock())) {
        blockchain.push(newBlock);
        blocksDb.write(blockchain);
    }
};
var isValidNewBlock = (newBlock, previousBlock) => {
    if (previousBlock.index + 1 !== newBlock.index) {
        console.log('invalid index');
        return false;
    } else if (previousBlock.hash !== newBlock.previousHash) {
        console.log('invalid previoushash');
        return false;
    } else if (calculateHashForBlock(newBlock) !== newBlock.hash) {
        console.log(typeof (newBlock.hash) + ' ' + typeof calculateHashForBlock(newBlock));
        console.log('invalid hash: ' + calculateHashForBlock(newBlock) + ' ' + newBlock.hash);
        return false;
    }
    return true;
};
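For completeness, the same helpers extend naturally to validating a whole chain; a minimal sketch (getGenesisBlock() is an assumed helper returning the hardcoded genesis block):
var isValidChain = (chainToValidate) => {
    // the first block must equal the known, hardcoded genesis block
    if (JSON.stringify(chainToValidate[0]) !== JSON.stringify(getGenesisBlock())) {
        return false;
    }
    // every later block must be valid with respect to its predecessor
    for (var i = 1; i < chainToValidate.length; i++) {
        if (!isValidNewBlock(chainToValidate[i], chainToValidate[i - 1])) {
            return false;
        }
    }
    return true;
};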
Congratulations, if you have gotten this far you have successfully set up Geth on AWS. Now we will go over how to configure an Ethereum node. Make sure you are in your home directory on your cloud server (check with the pwd command), then create a new folder, named whatever you like, to hold the genesis block of your Ethereum blockchain. You can do this with the following commands: the first creates the folder, the second changes directory into it, and the third creates a file called genesis.json.
mkdir mlg-ethchain
cd mlg-ethchain
nano genesis.json
To create a private blockchain you need to define the genesis block. Genesis blocks are usually embedded in the client, but with Ethereum you can configure one using a JSON object. Paste the following JSON object into your genesis.json file; each field is explained in the following section.
{
  "nonce": "0xdeadbeefdeadbeef",
  "timestamp": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "extraData": "0x",
  "gasLimit": "0x8000000",
  "difficulty": "0x400",
  "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x3333333333333333333333333333333333333333",
  "alloc": {}
}
The coinbase is the default address to which mining rewards are paid. Because you have not created a wallet yet, this can be set to anything you like, provided it is a valid Ethereum address. The difficulty controls how hard it is for a miner to create a new block; for testing and development purposes it is recommended that you start with a low difficulty and increase it later. The parentHash is the hash of the parent block, which doesn't exist here because this is the first block in the chain. The gasLimit is the maximum total amount of gas that the transactions in a block may consume. The nonce is a random number for this block; it is the number every miner has to guess to seal a block, but for the first block it must be hardcoded. You can provide any extra metadata in the extraData field. In the alloc section you can allocate pre-mined tokens or ether to certain addresses at the beginning of the blockchain. We do not want to do this, so we keep it empty.
After you have saved this, you can check that the file is configured correctly with the cat command. From the same directory, input this command.
cat genesis.json
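Once the file looks right, you would initialise the chain from it and start a node; a sketch of the commands, assuming a recent Geth build (note that newer Geth versions also require a "config" section with a chainId in genesis.json, which older tutorials like this one omit):
geth --datadir ./mlg-ethchain init genesis.json
geth --datadir ./mlg-ethchain --networkid 1234 console
The --networkid value is arbitrary; it just has to match on every node of your private network.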
From what I understand, your JS code does some sort of low-level block generation (albeit in a centralised/standalone way). I'm not sure of the purpose of that code, but it's not an app that you could "port to" Ethereum or Hyperledger [Fabric], since the role of those blockchain engines is precisely to implement the block generation logic for you.
A JS app designed to work with Ethereum wouldn't do any of the block management. On the contrary, it would perform high-level, client-side interaction with a smart contract, which is basically a class (similar to Java classes) available on the whole network and whose methods are guaranteed by the network to do what they're meant to. Your JS app would essentially call the smart contract's methods as a client without caring about any block production issues.
In other, more vague but maybe more familiar terms:
Client: the JS app
Server/backend: the smart contract
Infrastructure/engine: Ethereum
On Ethereum, the way you get to the smart contract as a client is by sending RPC calls in JSON (hence the name JSON-RPC) to the contract's methods. The communication is done by embedding that JSON over HTTP to an Ethereum node, preferably your own. In Javascript, a few libraries such as web3 give you a higher-level abstraction view so that you don't need to care about JSON-RPC and you can think of your contract's methods as normal Javascript functions.
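For illustration, a minimal web3.js sketch of such a client call; the node URL is the default local JSON-RPC endpoint, and abi, contractAddress, and getStudent() are placeholders standing in for your own contract:
var Web3 = require('web3');
// talk to your own node's JSON-RPC endpoint
var web3 = new Web3('http://localhost:8545');
// the ABI and address come from compiling and deploying your contract
var contract = new web3.eth.Contract(abi, contractAddress);
// a read-only call: no transaction is sent, no gas is spent
contract.methods.getStudent(studentId).call().then(console.log);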
Also, since you're asking about private Ethereum: another consequence of that distribution of layers is that client code and smart contracts don't need to care about whether the Ethereum network is a public or a private one, i.e. what consensus protocol is in place. To make a bold analogy, it's similar to how SQL queries or schemas stay the same no matter how the database is persisted on disk.
Interaction with Hyperledger Fabric is similar in concept, except that you do plain REST calls to an HTTP endpoint exposed by the network. I'm not sure what client-level abstraction libraries are available.
Related
I have read that blockchains aren't good at storing large files. But you can create a hash of a file and somehow store that in the blockchain, they say. How do I do that? What API or service should I be looking at? Can I do it on bitcoin.org, or does it need to be some special "parameterized" blockchain service of some sort?
In the blogs I've seen touting how cool it is to store your data in MySQL but store the hash in the blockchain, they don't show how you actually get the hash into the blockchain, or even which services are recommended for this. Where is the API? What field do I pass the hash into?
This is what I think the frontend would look like:
hashedData = web3.utils.sha3(JSON.stringify(certificate));
contracts.mycontract.deployed().then(function(result) {
    // get logged-in public key from MetaMask
    return result.createCertificate(public_addresskey, hashedData, { from: account });
}).then(function(result) {
    // send post request to backend to create db entry
}).catch(function(err) {
    console.error(err);
    // show error to the user
});
The contract might look something like the sketch below.
There is a struct to contain everything but the id, a mapping from hash => Cert for random access (using the hash as the id), and a certList to enumerate the certificates in case a hash isn't known. That situation should be rare, because the contract emits events for each important state change. You would probably want to protect the createCert() function with onlyOwner, a whitelist, or role-based access control.
To "confirm" a cert, the recipient signs a transaction. A true in this field shows that the user in question did indeed sign, because there is no other way for the flag to be set.
pragma solidity 0.5.1;

contract Certificates {

    struct Cert {
        address recipient;
        bool confirmed;
    }

    mapping(bytes32 => Cert) public certs;
    bytes32[] public certList;

    event LogNewCert(address sender, bytes32 cert, address recipient);
    event LogConfirmed(address sender, bytes32 cert);

    function isCert(bytes32 cert) public view returns(bool isIndeed) {
        if(cert == bytes32(0)) return false;
        return certs[cert].recipient != address(0);
    }

    function createCert(bytes32 cert, address recipient) public {
        require(recipient != address(0));
        require(!isCert(cert));
        Cert storage c = certs[cert];
        c.recipient = recipient;
        certList.push(cert);
        emit LogNewCert(msg.sender, cert, recipient);
    }

    function confirmCert(bytes32 cert) public {
        require(certs[cert].recipient == msg.sender);
        require(certs[cert].confirmed == false);
        certs[cert].confirmed = true;
        emit LogConfirmed(msg.sender, cert);
    }

    function isUserCert(bytes32 cert, address user) public view returns(bool) {
        if(!isCert(cert)) return false;
        if(certs[cert].recipient != user) return false;
        return certs[cert].confirmed;
    }
}
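As a client-side usage sketch (web3.js 1.x; certificatesAbi, certificatesAddress, and account are placeholders):
// the recipient confirms a certificate by sending a transaction
var certs = new web3.eth.Contract(certificatesAbi, certificatesAddress);
certs.methods.confirmCert(hashedData).send({ from: account })
    .on('receipt', function(receipt) {
        console.log('confirmed in tx', receipt.transactionHash);
    });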
The short answer is: "Use Ethereum smart contract with events and don't store hash values in the contract."
On Etherscan you may find a complete contract for the storage of hash values in Ethereum: HashValueManager. As you will see in this code example, things are a bit more complex: you have to manage access rights, and in that case cost was not important (the contract was designed for a syndicated blockchain).
It is actually very expensive to store hash data in a blockchain. When you use an Ethereum contract to store the hash data, you have to pay about 0.03 EUR (depending on the ETH and gas price) for each transaction.
A better option is not to store the hash value in the contract at all. A much cheaper alternative is to emit indexed events in a smart contract and query the events (this is very fast). Information about the events is stored in a bloom filter of the mined block and will be readable forever. The only disadvantage is that an event can't be read from within the contract (but this is not your business logic anyway).
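For example, with web3.js 1.x you can query a contract's historical events without ever reading contract storage; a sketch reusing the Certificates example from the previous answer (names are placeholders):
// fetch all LogNewCert events since the contract was deployed
certs.getPastEvents('LogNewCert', { fromBlock: 0, toBlock: 'latest' })
    .then(function(events) {
        events.forEach(function(e) { console.log(e.returnValues); });
    });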
You may read more details about events and how to fetch them in an application in "Ethereum Event Explorer for Smart Contracts" (disclaimer: this article is my blog). The complete source code is also available on GitHub.
There are several existing ways to store hashes. The more interesting question is: how do you plan to retrieve and use your data? I assume you plan to retrieve it somehow; otherwise there is no sense in storing it.
OK, here are two ways to store your hashes. I will demonstrate both with examples from the Emercoin blockchain.
Store the hash as an OP_RETURN UTXO. To do this, use the command createrawtransaction via the JSON-RPC API or the command-line interface, and specify a "data" output containing your hash.
In this case the data will be stored in an unspendable transaction output. This saved data is not indexed, so you need to save the TXID for future retrieval; otherwise you would need to rescan the entire blockchain to find your data.
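A sketch of such a call via the CLI (the funding txid/vout, the hex-encoded hash, and the change address are placeholders; Emercoin inherits this RPC from Bitcoin, so the exact syntax may differ by version):
emercoin-cli createrawtransaction '[{"txid":"<funding-txid>","vout":0}]' '{"data":"<hex-of-your-hash>","<change-address>":0.99}'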
Alternatively, you can store data in the indexed NVS subsystem (Name-Value Storage). To upload, run the command name_new, where the name can serve as your search key. Later you can quickly retrieve your data by that key with the commands name_show or name_history. It is also possible to save your data as DNS records within emerDNS and retrieve them via DNS UDP requests per protocol RFC 1035.
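For example (the key name, value, and number of days to keep the record are placeholders; check the documentation linked below for the exact parameters):
emercoin-cli name_new "cert:<hex-of-your-hash>" "<your-value>" 365
emercoin-cli name_show "cert:<hex-of-your-hash>"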
For details, see the doc: https://emercoin.com/en/documentation/emercoin-api
At first I was thinking about simply letting them see a Google Drive link, but that link would have to stay the same, so the customers could just send it to one another, even though people should have to use the smart contract to get it.
You can solve this with https://ipfs.io (a distributed file store) and an Ethereum DApp. Given that you want to require people to use a smart contract to get access to the file, I think you'll want to encrypt the file before storing it on IPFS. Because IPFS is distributed, anyone could fetch the file if they know it exists, but only someone with access to the decryption key could read it.
When the conditions of the smart contract have been met, you can release the key. Because an Ethereum smart contract can't do the actual decryption, your web3 app would need to do this part.
For example, in the web3 app after the file is stored on the IPFS:
addNewFile: function() {
    // encrypt the file
    var encryptedFile = CryptoJS.AES.encrypt(fileData, passPhrase);
    // store on the IPFS
    ipfs.files.write( ... => {
        // make sure it worked
        ...
        // fileName is an IPFS-generated hash uniquely identifying the (encrypted) file;
        // write it to the contract
        deployedFileContract.AddFileEntry(fileLocation, fileName, passPhrase, function(error) {
            // respond to error condition
        });
    });
},

retrieveEncryptedFile: function() {
    ...
    deployedFileContract.RetrieveFileEntry.call(_pointer-to-file_, function(error, result) {
        // fetch the encrypted file as a readable stream from IPFS,
        // decrypt it, and probably do something funky with mime types to make sure it gets saved
        var decryptedFile = CryptoJS.AES.decrypt(encryptedFileStream, passPhrase). ...
    });
    ...
};
All up, you would need your Solidity smart contract and a web3 app. You might run the IPFS js-client (https://github.com/ipfs/js-ipfs) in the browser, or access it via Node.js or Go.
I am writing an application where the Client issues commands to a web service (CQRS)
The client is written in C#
The client uses a WCF Proxy to send the messages
The client uses the async pattern to call the web service
The client can issue multiple requests at once.
My problem is that sometimes the client simply issues too many requests and the service starts returning that it is too busy.
Here is an example. I am registering orders, and there can be anywhere from a handful up to a few thousand.
var taskList = Orders.Select(order => _cmdSvc.ExecuteAsync(order))
                     .ToList();
await Task.WhenAll(taskList);
Basically, I call ExecuteAsync for every order and get a Task back. Then I just await for them all to complete.
I don't really want to fix this server-side because no matter how much I tune it, the client could still kill it by sending for example 10,000 requests.
So my question is: can I configure the WCF client in any way so that it simply takes all the requests but only sends a maximum of, say, 20 at a time, automatically dispatching the next as each one completes? Or is the Task I get back linked to the actual HTTP request, and can it therefore not return until the request has actually been dispatched?
If that is the case and the WCF client simply cannot do this for me, I have the idea of decorating the WCF client with a class that queues commands, returns a Task (using TaskCompletionSource), and then makes sure that no more than, say, 20 requests are active at a time. I know this will work, but I would like to ask if anyone knows of a library or class that does something like this?
This is kind of like throttling, but not exactly: I don't want to limit how many requests I can send in a given period of time, but rather how many requests can be active at any given time.
Based on @PanagiotisKanavos' suggestion, here is how I solved this.
RequestLimitCommandService acts as a decorator for the actual service, which is passed into the constructor as innerSvc. When someone calls ExecuteAsync, a completion source is created and posted, along with the command, to the ActionBlock; the caller then gets back a Task from the completion source.
The ActionBlock will then call the processing method. This method sends the command to the web service. Depending on what happens, it uses the completion source either to notify the original sender that the command was processed successfully or to attach the exception that occurred.
using System;
using System.Linq;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

public class RequestLimitCommandService : IAsyncCommandService
{
    private class ExecutionToken
    {
        public TaskCompletionSource<bool> Source { get; }
        public ICommand Command { get; }

        public ExecutionToken(TaskCompletionSource<bool> source, ICommand command)
        {
            Source = source;
            Command = command;
        }
    }

    private readonly IAsyncCommandService _innerSvc;
    private readonly ActionBlock<ExecutionToken> _block;

    public RequestLimitCommandService(IAsyncCommandService innerSvc, int maxDegreeOfParallelism)
    {
        _innerSvc = innerSvc;
        var options = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
        _block = new ActionBlock<ExecutionToken>(Execute, options);
    }

    // explicit interface implementations must not declare an access modifier
    Task IAsyncCommandService.ExecuteAsync(ICommand command)
    {
        var source = new TaskCompletionSource<bool>();
        var token = new ExecutionToken(source, command);
        _block.Post(token);
        return source.Task;
    }

    private async Task Execute(ExecutionToken token)
    {
        try
        {
            await _innerSvc.ExecuteAsync(token.Command);
            token.Source.SetResult(true);
        }
        catch (Exception ex)
        {
            token.Source.SetException(ex);
        }
    }
}
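Usage is then just a matter of wrapping the real service; a sketch based on the order example from the question (realCommandService stands in for whatever IAsyncCommandService implementation you already have):
IAsyncCommandService throttledSvc = new RequestLimitCommandService(realCommandService, 20);
var taskList = Orders.Select(order => throttledSvc.ExecuteAsync(order)).ToList();
await Task.WhenAll(taskList);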
Our organization wants to develop a "LOST & FOUND System Application" using a chatbot integrated into a website.
Whenever a user starts a conversation with the chatbot, the chatbot should ask for the details of the lost or found item and store those details in a database.
How can we do this?
And can we use our own web service, since the organization doesn't want to keep the database on Amazon's servers?
As someone who just implemented this very same situation (with a lot of help from @Sid8491), I can give some insight on how I managed it.
Note, I'm using C# because that's what the company I work for uses.
First, the bot requires input from the user to decide what intent is being called. For this, I implemented a PostText call to the Lex API.
PostTextRequest lexTextRequest = new PostTextRequest()
{
    BotName = botName,
    BotAlias = botAlias,
    UserId = sessionId,
    InputText = messageToSend
};
try
{
    lexTextResponse = await awsLexClient.PostTextAsync(lexTextRequest);
}
catch (Exception ex)
{
    throw new BadRequestException(ex);
}
Please note that this requires you to have created a Cognito Object to authenticate your AmazonLexClient (as shown below):
protected void InitLexService()
{
    //Grab region for Lex bot services
    Amazon.RegionEndpoint svcRegionEndpoint = Amazon.RegionEndpoint.USEast1;
    //Get credentials from Cognito
    awsCredentials = new CognitoAWSCredentials(
        poolId,             // Identity pool ID
        svcRegionEndpoint); // Region
    //Instantiate Lex client with region
    awsLexClient = new AmazonLexClient(awsCredentials, svcRegionEndpoint);
}
After we get the response from the bot, we use a simple switch case to correctly identify the method we need to call for our web application to run. The entire process is handled by our web application, and we use Lex only to identify the user's request and slot values.
//Call Amazon Lex with text, capture response
var lexResponse = await awsLexSvc.SendTextMsgToLex(userMessage, sessionID);

//Extract intent and slot values from the Lex response
string intent = lexResponse.IntentName;
var slots = lexResponse.Slots;

//Use the response's intent to call the appropriate method
switch (intent)
{
    case "YourIntentName": //replace with your intent's name
        /*Call appropriate method*/;
        break;
}
After that, it is just a matter of displaying the result to the user. Do let me know if you need more clarification!
UPDATE:
An example implementation of the slots data to write to SQL (again in C#) would look like this:
case "LostItem":
message = "Please fill the following form with the details of the item you lost.";
LostItem();
break;
This would then take you to the LostItem() method which you can use to fill up a form.
public void LostItem()
{
    string itemName = string.Empty;
    //itemName = ... get from user
    //repeat with whatever else you need for a complete item object

    //Implement a SQL call to a stored procedure that inserts the object into your database.
    //You can do a similar call to the database to retrieve an object as well.
}
That should point you in the right direction hopefully. Google is your best friend if you need help with SQL stored procedures. Hopefully this helped!
Yes, it's possible.
You can send the requests to Lex from your website, and Lex will extract the intents and entities.
Once you have those, you can write backend code in any language of your choice and use any DB you want.
In your use case you might just want to use Lex; PostText will be the main function you call.
You will need to create an intent in Lex with multiple slots, LosingDate, LosingPlace, or whatever you need; Lex will then gather all this information from the user and pass it to your web application.
This post:
http://kennytordeur.blogspot.com/2011/04/nhibernate-in-combination-with_06.html
describes how to load an entity from a resource other than a database, in this case a web service. This is great, but if I load a number of clients in one query, each with a different MaritialState, it will have to call the web service for each client. Is there a way to preload all marital states so it doesn't have to go back and forth to the web service for each client?
I don't think NHibernate supports this. The 'n+1 selects problem' is a well-known issue, and NHibernate has quite a few strategies for dealing with it (batches, subselects, eager fetching, etc.). The problem is that you have an 'n+1 web service calls' problem, and all those mechanisms are useless for it. NHibernate simply does not know what you are doing inside IUserType; it assumes that you are converting already-loaded data.
It looks like you will have to implement your own preloading. Something like this:
// TODO: not thread safe, lock or use ConcurrentDictionary
static IDictionary<Int32, ClientDto> _preLoadedClients
    = new Dictionary<Int32, ClientDto>();

public Object NullSafeGet(IDataReader rs, String[] names, ...) {
    Int32 clientId = (Int32)NHibernateUtil.Int32.NullSafeGet(rs, names[0]);
    // see if the client has already been preloaded:
    if (_preLoadedClients.ContainsKey(clientId)) {
        return _preLoadedClients[clientId];
    }
    // load a batch: clientId, clientId + 1, ..., clientId + 99
    var batchOfIds = Enumerable.Range(clientId, 100);
    var clientsBatch = clientService.GetClientsByIds(batchOfIds);
    foreach (var client in clientsBatch) {
        // assumes ClientDto exposes its id
        _preLoadedClients.Add(client.Id, client);
    }
    return _preLoadedClients[clientId];
}