I am working with Fabric chaincodes and have implemented the table concepts provided to store data, since the blockchain does not allow modification or deletion. I am eager to know the internal implementation of the table format. Is there any documentation for that? If yes, please point me to it; if anyone knows and can explain, even better.
Thanks in advance
Tables are implemented using Protocol Buffers.
You can have a look at the file table.pb.go. This file is auto-generated from the proto message definitions in table.proto.
On top of this, the chaincode API provides functions like CreateTable, GetTable, DeleteTable, GetRow, GetRows, and InsertRow, which you might be using in your chaincode.
Functions like CreateTable, InsertRow, and DeleteTable, which modify data, internally use the PutState API to write byte values to the ledger: the table layer marshals a struct defined in table.pb.go into bytes and stores those bytes in the ledger.
Similarly, functions like GetRow, GetRows, and GetTable, which query data, internally use the GetState API to read byte values from the ledger: the value is fetched from the ledger as bytes and then unmarshaled back into the structs.
Effectively, you get to interact with Go structs without caring how the table is stored internally. The sketch below illustrates that round trip.
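As a rough illustration, here is a minimal sketch of the marshal/unmarshal round trip, assuming the v0.6-era table shim; the import paths, the Row message name, and the stub type are assumptions based on the files mentioned above, not verbatim Fabric code:

package tables

import (
	"github.com/golang/protobuf/proto"
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos" // assumed home of table.pb.go
)

// storeRow sketches what InsertRow does internally: marshal the Row
// struct generated in table.pb.go and hand the bytes to PutState.
func storeRow(stub shim.ChaincodeStubInterface, key string, row *pb.Row) error {
	raw, err := proto.Marshal(row) // struct -> bytes
	if err != nil {
		return err
	}
	return stub.PutState(key, raw) // bytes -> ledger
}

// loadRow sketches what GetRow does internally: fetch the bytes with
// GetState and unmarshal them back into the generated struct.
func loadRow(stub shim.ChaincodeStubInterface, key string) (*pb.Row, error) {
	raw, err := stub.GetState(key) // ledger -> bytes
	if err != nil {
		return nil, err
	}
	row := &pb.Row{}
	err = proto.Unmarshal(raw, row) // bytes -> struct
	return row, err
}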
Related
I am currently learning how to start developing on ChainLink, and I saw that there is a GetRoundData() method that is used to return data from a specific timestamp.
When I dug into the code, I found that the method comes from the interface AggregatorV3Interface. Also, I didn't find the implementation of the function inside any of the .sol files, but I did find it in a .go file.
My question is: how does the aggregator store data on the blockchain? As far as I can see, the data comes from nowhere when I call getRoundData. If the data comes from a module written in Go, does that mean the data source is off-chain? Thank you.
Code snippet captures:
aggregator_v2v3_interface.go
AggregatorV3Interface.sol
A contract implementing this interface is deployed at an address specified in the _AggregatorV2V3Interface Golang variable.
So your offchain script is connected to a node of some EVM network (Ethereum, BSC, Polygon, ...) and queries the node to perform a read-only, gas-free call on that specific contract address. The actual data is stored onchain.
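For illustration, such an offchain reader could look like the sketch below; the binding package path, the constructor name, the RPC URL, and the feed address are all assumptions (the bindings would come from the generated aggregator_v2v3_interface.go mentioned above):

package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
	aggregator "github.com/smartcontractkit/chainlink/core/gethwrappers/generated/aggregator_v2v3_interface" // path assumed
)

func main() {
	// Connect to any EVM node; the RPC URL is a placeholder.
	client, err := ethclient.Dial("https://rpc.example.org")
	if err != nil {
		log.Fatal(err)
	}

	// Address of a deployed feed contract (placeholder value).
	addr := common.HexToAddress("0x0000000000000000000000000000000000000000")
	feed, err := aggregator.NewAggregatorV2V3Interface(addr, client)
	if err != nil {
		log.Fatal(err)
	}

	// Read-only eth_call: no transaction, no gas. The node answers
	// straight from the contract's onchain storage.
	round, err := feed.LatestRoundData(&bind.CallOpts{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("round:", round.RoundId, "answer:", round.Answer, "updated at:", round.UpdatedAt)
}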
A co-worker and I have been discussing the best way to store data in memory within our C++ server. Basically, we need to store all requisitions made by clients. Those requisitions come as JSON objects, so each requisition may have a different number of parameters. Later, clients can ask the server for a list of those requisitions.
The total number of requisitions is small (order of 10^3). Clients ask for the list of requisitions using pagination.
So my question is what is the standard way of doing that?
1) Create a class that stores every JSON and then, when requested, sends the list of those JSONs.
2) Deserialize the JSON, store it in a class, then serialize the data again when requested.
If 2, what is the best way of doing that in modern C++?
3) Another option?
Thank you.
If the client asks you to support JSON, there are only two steps you need to take:
Add some JSON library (e.g. this) with a suitable license to the project.
Use it.
If the implementation of JSON is not the main goal of the project, this should work.
Note: you can also get a lot of design hints inspecting the aforementioned repo.
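As a concrete illustration of option 2 at this scale, here is a minimal sketch using nlohmann/json (one possible library choice; the class and method names are made up for the example):

// Keep requisitions as parsed json values; serialize a page on demand.
#include <nlohmann/json.hpp>
#include <string>
#include <vector>

using nlohmann::json;

class RequisitionStore {
    std::vector<json> requisitions_;  // ~10^3 entries, so a flat vector is fine
public:
    void add(const std::string& body) {
        requisitions_.push_back(json::parse(body));  // throws on invalid JSON
    }

    // Serialize one page [index*size, index*size + size) back to a JSON array.
    std::string page(std::size_t page_index, std::size_t page_size) const {
        json out = json::array();
        std::size_t begin = page_index * page_size;
        for (std::size_t i = begin;
             i < begin + page_size && i < requisitions_.size(); ++i)
            out.push_back(requisitions_[i]);
        return out.dump();
    }
};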
I need random access to my files stored in a MongoDB using the GridFS specification. It seems that the C++ driver (mongocxx) doesn't provide an interface for doing that. I can create a mongocxx::gridfs::downloader object from a mongocxx::gridfs::bucket, however the only "lower level" read operation I can find is
std::size_t read(std::uint8_t *buffer, std::size_t length)
What I'm missing is a third parameter, std::size_t offset. My current workaround is to circumvent the mongocxx::gridfs API completely, i.e., to query the chunks collection and assemble the needed buffer manually. But I would actually like to use the driver's API for that.
Is there an API for my use case in the mongocxx driver that I didn't see or should I write a feature request?
After a closer look into all the related sources, and after also discussing this question in the MongoDB user group, the answer is:
No, as of now there is no API for partial file retrieval in the GridFS API of the mongo C++ driver.
I have filed a feature request in the MongoDB JIRA system for it.
I believe it is possible to do this by manually iterating over the chunks through the C++ driver. You have to figure out which chunks contain the requested range of data, then read them and merge the obtained data together, as in the sketch below.
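A minimal sketch of that workaround, assuming the default "fs" bucket prefix and that you know the file's chunkSize from its fs.files document (function and variable names are illustrative):

#include <bsoncxx/builder/basic/document.hpp>
#include <bsoncxx/oid.hpp>
#include <bsoncxx/types.hpp>
#include <mongocxx/database.hpp>
#include <mongocxx/options/find.hpp>
#include <cstdint>
#include <vector>

// Read [offset, offset + length) of a GridFS file by querying fs.chunks.
std::vector<std::uint8_t> read_range(mongocxx::database& db,
                                     bsoncxx::oid file_id,
                                     std::int64_t offset, std::int64_t length,
                                     std::int64_t chunk_size) {
    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_document;

    // Which chunk numbers cover the requested byte range?
    std::int64_t first = offset / chunk_size;
    std::int64_t last  = (offset + length - 1) / chunk_size;

    mongocxx::options::find opts;
    opts.sort(make_document(kvp("n", 1)));  // chunks must be merged in order

    auto cursor = db["fs.chunks"].find(
        make_document(kvp("files_id", bsoncxx::types::b_oid{file_id}),
                      kvp("n", make_document(kvp("$gte", first),
                                             kvp("$lte", last)))),
        opts);

    std::vector<std::uint8_t> out;
    std::int64_t pos = first * chunk_size;  // absolute offset of current byte
    for (auto&& chunk : cursor) {
        auto data = chunk["data"].get_binary();
        for (std::uint32_t i = 0; i < data.size; ++i, ++pos)
            if (pos >= offset && pos < offset + length)
                out.push_back(data.bytes[i]);
    }
    return out;
}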
I'm currently using boto3 with DynamoDB, and I noticed that there are two types of batch write:
batch_writer is used in the tutorial, and it seems like you can just iterate through different JSON objects to do inserts (this is just one example, of course).
batch_write_item seems to me to be a DynamoDB-specific function. However, I'm not 100% sure about this, and I'm not sure what the difference between these two functions is (performance, methodology, what not).
Do they do the same thing? If so, why have two different functions? If not, what's the difference, and how do they compare in performance?
As far as I understand and use these APIs, with batch_write_item() you can even handle data for more than one table in one query, while with batch_writer() the actions you specify apply only to a certain table. I think that is the most basic difference I can point out.
batch_writer creates a context manager for writing objects to Amazon DynamoDB in batch.
The batch writer will automatically handle buffering and sending items in batches.
In addition, the batch writer will also automatically handle any unprocessed items and resend them as needed. All you need to do is call put_item for any items you want to add, and delete_item for any items you want to delete.
In addition, you can specify auto_dedup if the batch might contain duplicated requests and you want this writer to handle de-dup for you.
source
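Side by side, the two look like this minimal sketch (the table names and items here are just examples):

import boto3

# batch_writer: resource-level context manager that buffers put/delete
# calls into batches of 25 and retries unprocessed items for you.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("my-table")  # example table name
with table.batch_writer() as writer:
    for i in range(100):
        writer.put_item(Item={"pk": str(i), "value": i})

# batch_write_item: low-level client call mirroring the DynamoDB API.
# One request can target several tables, but it accepts at most 25
# operations and returns UnprocessedItems for the caller to retry.
client = boto3.client("dynamodb")
response = client.batch_write_item(
    RequestItems={
        "my-table": [
            {"PutRequest": {"Item": {"pk": {"S": "a"}, "value": {"N": "1"}}}},
        ],
        "other-table": [
            {"DeleteRequest": {"Key": {"pk": {"S": "b"}}}},
        ],
    }
)
unprocessed = response.get("UnprocessedItems", {})  # retry these yourself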
I am trying to get a list of all devices in the system, together with how they are connected. Therefore, I essentially want to clone the structure of the IO Kit services tree (which you can see with IORegistryExplorer). How do I iterate through all the keys? (One of the reasons this is confusing to me is that I don't understand the difference between io_service, io_registry, and io_object.)
The difference between service, registry entry, and object lies only in the circumstances in which they are used. Otherwise they are completely the same.
From IOTypes.h:
typedef io_object_t io_registry_entry_t;
typedef io_object_t io_service_t;
There is documentation available about Traversing the I/O Registry, which also includes information on traversing the whole registry.
For each entry you would then have to get the properties and save them in your representation of the registry.
So you would use IORegistryGetRootEntry(), print/save its name and properties, and then iterate over the children with IORegistryEntryGetChildIterator().
You get the properties with IORegistryEntryCreateCFProperties(), followed by CFDictionaryGetKeysAndValues(). For the values you then have to check what types they are in order to print/save them (or use CFShow). When you really want to clone this into a different structure (with different types), you have to handle every possible CFTypeID explicitly.
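Put together, the traversal could look like this minimal sketch (real API names; error handling is mostly omitted, and CFShow stands in for handling each CFTypeID; build with -framework IOKit -framework CoreFoundation):

#include <IOKit/IOKitLib.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

static void dump_entry(io_registry_entry_t entry, int depth) {
    io_name_t name;
    IORegistryEntryGetName(entry, name);
    printf("%*s%s\n", depth * 2, "", name);

    CFMutableDictionaryRef props = NULL;
    if (IORegistryEntryCreateCFProperties(entry, &props,
                                          kCFAllocatorDefault, 0) == KERN_SUCCESS
        && props != NULL) {
        CFShow(props);    // print keys/values; a real clone would inspect
        CFRelease(props); // every CFTypeID here instead
    }

    io_iterator_t children;
    if (IORegistryEntryGetChildIterator(entry, kIOServicePlane,
                                        &children) == KERN_SUCCESS) {
        io_registry_entry_t child;
        while ((child = IOIteratorNext(children)) != 0) {
            dump_entry(child, depth + 1);
            IOObjectRelease(child);
        }
        IOObjectRelease(children);
    }
}

int main(void) {
    io_registry_entry_t root = IORegistryGetRootEntry(kIOMasterPortDefault);
    dump_entry(root, 0);
    IOObjectRelease(root);
    return 0;
}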
I created a working prototype at
https://gist.github.com/JonnyJD/6126680
EDIT:
In another SO answer, the (C) source code of ioreg is linked. That should be a good resource for printing/extracting the missing CFTypes.