I was working on a CRUD DApp, coding in Solidity. I am unable to figure out how deletion and updating work on the blockchain network.
There is none. You can only add new transactions. You have to add a new transaction saying that something was deleted or updated, and then whoever is displaying the data needs to see the new transaction and stop displaying the old data.
If you are writing a smart contract, you can delete or update data in storage, but other people can still see the old data, since it remains in the previous transactions - there is no privacy.
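For example, here is a minimal web3.py sketch (the contract address, storage slot, and block number are hypothetical placeholders) showing that an "updated" storage slot can still be read at an earlier block, provided the node keeps archive state:

from web3 import Web3

# Requires a node that retains archive state for old blocks.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

contract = "0x0000000000000000000000000000000000000000"  # hypothetical contract
slot = 0                                                 # hypothetical storage slot

# Current value of the slot, after your contract "updated" it...
print(w3.eth.get_storage_at(contract, slot).hex())
# ...and the old value, still readable at an earlier (hypothetical) block.
print(w3.eth.get_storage_at(contract, slot, block_identifier=12_000_000).hex())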
I want to get the "entire" Ethereum blockchain data, not just from a few sets of smart contracts. By data I mean transaction details, including the generated logs.
I can get real-time data using Infura, but it's pretty much impossible to fetch all the old data that way; it would simply cost too much, because I would have to make too many network requests.
I need the old data because I am trying to make an indexed database out of the "append-only" Ethereum transaction data so that I can easily query it.
To be more precise, I would like to retrieve all NFT (ERC721, ERC1155) transfer transactions and their logs, so that I can run queries such as: all the NFTs owned by a particular wallet, or the transfer history of a particular NFT token.
You can do this by:
Run your own node
Query data from your node - locally it is fast
For some data, you might need to run the node in archival mode
You can use the same Web3 / JSON-RPC APIs on a local node as you are using on Infura.
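For example, a minimal web3.py sketch, assuming a default local geth setup: only the provider URL changes, the calls stay the same (the Infura project ID is a placeholder):

from web3 import Web3

# Remote, via Infura (placeholder project ID):
# w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<your-project-id>"))

# Local, via your own node's HTTP endpoint or IPC socket:
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
# w3 = Web3(Web3.IPCProvider("~/Library/Ethereum/geth.ipc"))

# The same call works against either provider.
print(w3.eth.block_number)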
Two solutions I have discovered.
Just like @Mikko mentioned, you can run your own node, and it seems not to be as complex as I had expected. You can search for "geth" and then simply connect this node to your web3 library, just as you would connect to Infura.
But I have not tried this, because I found a much better solution.
Google Cloud BigQuery's public dataset has all the old Ethereum data. BigQuery is Google's data warehouse service, where you can use simple SQL to query your data. New data is added every day. I have already tested some simple queries from its console and the results were good.
I am planning to fetch all the old data I need from BigQuery, store it in my own database, and afterwards get real-time data from Infura. Now that I don't have to fetch all the old data from Infura, the price becomes very affordable.
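For example, a minimal sketch using the google-cloud-bigquery Python client against the public crypto_ethereum dataset (the wallet address is a placeholder; authentication via your default GCP credentials is assumed):

from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP project and credentials

# All ERC20/ERC721 Transfer events received by one (placeholder) wallet.
query = """
    SELECT token_address, from_address, transaction_hash, block_number
    FROM `bigquery-public-data.crypto_ethereum.token_transfers`
    WHERE to_address = '0x0000000000000000000000000000000000000000'
    ORDER BY block_number
    LIMIT 100
"""
for row in client.query(query).result():
    print(row["block_number"], row["token_address"], row["transaction_hash"])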
You may check this: https://github.com/blockchain-etl/ethereum-etl
It is a Python library for ETL (extract, transform and load) jobs for Ethereum blocks, transactions, ERC20 / ERC721 tokens, transfers, receipts, logs, contracts, and internal transactions.
For example, you may run the CLI command
> ethereumetl export_token_transfers --start-block 0 --end-block 500000 \
--provider-uri file://$HOME/Library/Ethereum/geth.ipc --output token_transfers.csv
You may export ERC20 and ERC721 transfers for a specific block range, which enables you to query the old data.
Data is also available in Google BigQuery.
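Once exported, the CSV can be queried directly before loading it into a database. A minimal pandas sketch, assuming the column names ethereum-etl writes (token_address, from_address, to_address, block_number) and placeholder addresses:

import pandas as pd

transfers = pd.read_csv("token_transfers.csv")

# Transfer history of one token contract (placeholder address).
token = "0x0000000000000000000000000000000000000000"
print(transfers[transfers["token_address"] == token].sort_values("block_number"))

# All transfers received by one wallet (placeholder address).
wallet = "0x0000000000000000000000000000000000000000"
print(transfers[transfers["to_address"] == wallet])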
I found a paper that talks about a way to store data off-chain using the blockchain. The data is sent to the blockchain in a transaction, which subsequently routes it to an off-blockchain store while retaining only a pointer to the data on the public ledger.
In particular the paper says:
Consider the following example: a user installs an application that uses our platform for preserving her privacy. As the user signs up for the first time, a new shared (user, service) identity is generated and sent, along with the associated permissions, to the blockchain in a Taccess transaction. Data collected on the phone (e.g., sensor data such as location) is encrypted using a shared encryption key and sent to the blockchain in a Tdata transaction, which subsequently routes it to an off-blockchain key-value store, while retaining only a pointer to the data on the public ledger (the pointer is the SHA-256 hash of the data).
What I cannot understand is how they do it! If all the nodes on the blockchain have to execute that very transaction, it means that they all have to save that information off-blockchain, causing a duplication of content. Did I get it wrong?
From a quick glance at the paper in question, it makes no mention of storage replication. The use case they are describing is to use blockchain transactions as references to physical data that is stored somewhere. The data can be accessed by anyone who has the reference to it, i.e. access to that particular blockchain system; however, the data is encrypted such that only parties with the encryption key can actually decipher it. This approach allows for quick validation of data integrity while maintaining privacy.
From the perspective of a blockchain node, all it sees is a transaction that will be added to its local ledger; the nodes don't actually save the data themselves.
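To make the pattern concrete, here is a toy Python sketch (the ledger and key-value store are stand-in dictionaries, not real components): the payload lives off-chain, only its SHA-256 hash is recorded on the ledger, and integrity is checked by re-hashing on retrieval.

import hashlib

off_chain_store = {}  # stand-in for the off-blockchain key-value store
ledger = []           # stand-in for the public ledger

def store(data: bytes) -> str:
    pointer = hashlib.sha256(data).hexdigest()  # pointer = SHA-256 hash of the data
    off_chain_store[pointer] = data             # full (encrypted) payload kept off-chain
    ledger.append(pointer)                      # only the pointer goes on the ledger
    return pointer

def fetch_and_verify(pointer: str) -> bytes:
    data = off_chain_store[pointer]
    assert hashlib.sha256(data).hexdigest() == pointer, "data was tampered with"
    return data

ptr = store(b"encrypted sensor reading")
print(fetch_and_verify(ptr))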
I want to store personnel data on a blockchain for a company. We want to prove that the data is unchangeable. A customer in the blockchain will not be able to access or see any other customer's data.
But the company will have access to all customer data, can perform any operation, and can also follow any operation and any access log.
The company will store a new form type (personal data) and flag it as a personal data card.
Is this possible with blockchain?
The best method would be to encrypt the data, but it really depends upon what you are doing with it. If you need to do operations on it, then you will have to use zk-SNARKs, but these are a new field and you would have to do a lot of research to get them working. If you aren't using the data for anything and it's just metadata, then why would you need it to be on a public ledger and validated?
Plus, there is one big problem with storing sensitive data on the blockchain: the blockchain is immutable, and once something is on it, it is stored forever. So what if there comes a time when quantum computers become so powerful that they can break all the encryption we have today? Then all your users' personal data will be public on the blockchain.
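As a sketch of the encrypt-before-storing approach suggested above (using the cryptography package's Fernet recipe; the record content is made up), the company holds the key off-chain and the ledger only ever sees the ciphertext's hash:

from cryptography.fernet import Fernet
import hashlib

key = Fernet.generate_key()  # held by the company, never put on-chain
f = Fernet(key)

record = b'{"name": "Alice", "role": "Engineer"}'  # made-up personnel record
ciphertext = f.encrypt(record)

# Anchor only the hash of the ciphertext on the ledger to prove immutability.
on_chain_pointer = hashlib.sha256(ciphertext).hexdigest()
print(on_chain_pointer)

# The company (key holder) can still read the record; other customers cannot.
print(f.decrypt(ciphertext))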
I'm new to IBM Hyperledger Fabric.
While trying to go over the documents, I see there are a couple of state operations:
getState, putState, delState, etc.
https://github.com/hyperledger/fabric/blob/master/core/chaincode/shim/chaincode.go
I'm wondering: if the ledger is 'immutable and chained', how can we 'delete' the state?
Given that it is a ledger chained by each transaction, wouldn't deleting state be impossible, or at least corrupt the chain of hashes?
Thank you!
There is a state database that stores keys and their values. This is different from the sequence of blocks that make up the blockchain. A key and its associated value can be removed from the state database using the DelState function. However, this does not mean that there is an alteration of blocks on the blockchain. The removal of a key and value would be stored as a transaction on the blockchain just as the prior addition and any modifications were stored as transactions on the blockchain.
Concerning different hashes, it is possible that block hashes could diverge if there is non-deterministic chaincode. Creating chaincode that is non-deterministic should be avoided. Here is a documentation topic that discusses non-deterministic chaincode.
The history of a key can be retrieved after the key is deleted. There is a GetHistoryForKey() API that retrieves the history and part of its response is an IsDeleted flag that indicates if the key was deleted. It would be possible to create a key, delete the key, and then create the key again; the GetHistoryForKey() API would track such a case.
The state database stores the current state, so the key and its value are deleted from the state database. The GetHistoryForKey() API reviews the chain history and not the state database to find prior key values.
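To make the distinction concrete, here is a toy Python model (not actual Fabric chaincode): a DelState-style deletion removes the key from the state database, but the deletion itself is appended to the chain, so the full history, including an is_deleted flag, stays recoverable.

state_db = {}  # current state: key -> value
chain = []     # append-only log of transactions

def put_state(key, value):
    state_db[key] = value
    chain.append({"key": key, "value": value, "is_deleted": False})

def del_state(key):
    state_db.pop(key, None)  # removed from the current state...
    chain.append({"key": key, "value": None, "is_deleted": True})  # ...but recorded on the chain

def get_history_for_key(key):
    return [tx for tx in chain if tx["key"] == key]

put_state("marble1", "blue")
del_state("marble1")
put_state("marble1", "red")  # create, delete, create again
print("marble1" in state_db)           # True: current state holds the new value
print(get_history_for_key("marble1"))  # full history, including the deletion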
There is an example that illustrates use of the GetHistoryForKey() API. See the getHistoryForMarble function.
I am using Microsoft Sync Framework 4.0 for syncing SQL Server database tables with a SQLite database on the iPad side.
Before making any database schema changes in the SQL Server database, we have to deprovision the database tables. Also, after making the schema changes, we reprovision the tables.
Now in this process, the tracking tables (i.e. the syncing information) get deleted.
I want the tracking table information to be restored after reprovisioning.
How can this be done? Is it possible to make DB changes without deprovisioning?
E.g., the application is at version 2.0 and syncing is working fine. Now in the next version, 3.0, I want to make some DB changes. In the process of deprovisioning and reprovisioning, the tracking info gets deleted, so all the tracking information from the previous version is lost. I do not want to lose the tracking info. How can I restore this tracking information from the previous version?
I believe we will have to write custom code or a trigger to store the tracking information before deprovisioning. Could anyone suggest a suitable method or provide some useful links regarding this issue?
The provisioning process should automatically populate the tracking table for you; you don't have to copy and reload it yourself.
Now, if you think the tracking table is where the framework stores what was previously synced, the answer is no.
The tracking table simply stores what was inserted/updated/deleted; it's used for change enumeration. The information on what was previously synced is stored in the scope_info table.
When you deprovision, you wipe out this sync metadata. When you sync again, it's as if the two replicas have never synced before, so you will encounter conflicts as the framework tries to apply rows that already exist on the destination.
You can find information here on how to "hack" the Sync Framework-created objects to effect some types of schema changes:
Modifying Sync Framework Scope Definition – Part 1 – Introduction
Modifying Sync Framework Scope Definition – Part 2 – Workarounds
Modifying Sync Framework Scope Definition – Part 3 – Workarounds – Adding/Removing Columns
Modifying Sync Framework Scope Definition – Part 4 – Workarounds – Adding a Table to an existing scope
Let's say I have one table, "User", that I want to sync.
A tracking table "User_tracking" will be created, and some sync information will be present in it after syncing.
When I make any DB changes, this tracking table "User_tracking" will be deleted and the tracking info will be lost during the deprovisioning-provisioning process.
My workaround:
Before deprovisioning, I will write a script to copy all the "User_tracking" data into another temporary table, "User_tracking_1", so all the existing tracking info will be stored in "User_tracking_1". When I reprovision the table, a new tracking table "User_tracking" will be created.
After reprovisioning, I will copy the data from "User_tracking_1" back to "User_tracking" and then delete the contents of "User_tracking_1".
The tracking info will be restored.
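A minimal sketch of that backup/restore script using pyodbc (the connection string is a placeholder, and it assumes the reprovisioned tracking table keeps the same columns; if reprovisioning changes the tracking schema, the column lists must be adjusted):

import pyodbc

# Placeholder connection string.
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=MyDb;Trusted_Connection=yes")
cur = conn.cursor()

# 1. Before deprovisioning: copy the tracking data aside.
cur.execute("SELECT * INTO User_tracking_1 FROM User_tracking")
conn.commit()

# ... deprovision, apply schema changes, reprovision ...

# 2. After reprovisioning: restore the saved tracking rows, then drop the copy.
# Assumes the new tracking table has the same columns as before.
cur.execute("INSERT INTO User_tracking SELECT * FROM User_tracking_1")
cur.execute("DROP TABLE User_tracking_1")
conn.commit()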
Is this the right approach?