Where is the Hyperledger Fabric chaincode storage location? - blockchain

I used the peer command in the cli container to install chaincode.
A file named with the chaincode name and version was created on the peer containers
(location: /var/hyperledger/production/chaincodes).
What is this file? Is it the chaincode?
I am looking for the chaincode after install.

The chaincode package which is installed on the peer is technically the serialized bytes of a protobuf message. The protobuf is a ChaincodeDeploymentSpec message, which is defined in the "github.com/hyperledger/fabric/protos/peer" package:
message ChaincodeDeploymentSpec {
  enum ExecutionEnvironment {
    DOCKER = 0;
    SYSTEM = 1;
  }
  ChaincodeSpec chaincode_spec = 1;
  // Controls when the chaincode becomes executable.
  google.protobuf.Timestamp effective_date = 2;
  bytes code_package = 3;
  ExecutionEnvironment exec_env = 4;
}
The actual chaincode source files are stored in a serialized archive held in the code_package field of the ChaincodeDeploymentSpec.
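For illustration, here is a minimal Go sketch of inspecting such an installed package. It assumes a Fabric 1.x peer with the protos available under github.com/hyperledger/fabric/protos/peer, and a hypothetical installed file named mycc.1.0 (replace with your chaincode name and version):

// Minimal sketch (not from the answer above): read an installed chaincode
// package from the peer's filesystem and unmarshal it into a
// ChaincodeDeploymentSpec.
package main

import (
    "fmt"
    "io/ioutil"

    "github.com/golang/protobuf/proto"
    pb "github.com/hyperledger/fabric/protos/peer"
)

func main() {
    // on the peer the file is named <chaincode name>.<version>
    raw, err := ioutil.ReadFile("/var/hyperledger/production/chaincodes/mycc.1.0")
    if err != nil {
        panic(err)
    }
    cds := &pb.ChaincodeDeploymentSpec{}
    if err := proto.Unmarshal(raw, cds); err != nil {
        panic(err)
    }
    fmt.Println("chaincode spec:", cds.ChaincodeSpec)
    fmt.Println("code_package size (serialized source archive):", len(cds.CodePackage))
}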

Related

Using debugger SWO on STM32

As the title suggests, I would like to use SWO for debugging on an STM32F1 device without using the ST-LINK utilities. I think I have flashed the code that sends messages via SWO to my chip, but I cannot establish a connection on the correct port; some tutorials suggested I use PuTTY with telnet on port 2332.
Specifics about my configuration:
Relevant code in my main:
int _write(int file, char *ptr, int len)
{
  int DataIdx;
  for (DataIdx = 0; DataIdx < len; DataIdx++)
  {
    __io_putchar(*ptr++);
  }
  return len;
}
...
int main() {
  while (1)
  {
    printf("Hi\n");
  }
}
I have seen some tutorials using Eclipse configurations or the ST-LINK utilities, but I have a very custom toolchain (I use Conan, ObKo's CMake configurations https://github.com/ObKo/stm32-cmake , OpenOCD and Linux) for the sole purpose of using C++17 and libraries like Boost, so I cannot use the ST-LINK utilities.
You need to enable ITM stimulus port 0 (or another one) and then output the characters over that port. Once that works, you can wrap it into printf by having __io_putchar (called from _write above) forward each character to ITM_SendChar:
uint32_t ITM_SendChar (uint32_t ch)
{
  if (((ITM->TCR & ITM_TCR_ITMENA_Msk) != 0) &&  /* ITM enabled */
      ((ITM->TER & 1) != 0))                     /* ITM Port #0 enabled */
  {
    while (ITM->PORT[0].u32 == 0)
    {
      __NOP();
    }
    ITM->PORT[0].u8 = (uint8_t)ch;
  }
  return (ch);
}
The problem with my configuration was that I didn't have a parser on my computer for the ITM messages, so even with the right configuration on the chip, my terminal stayed empty of messages when I opened the port to communicate with it. Luckily I found a very helpful parser on GitHub, so all I had to do after compiling the program with the above configuration was to run OpenOCD with two options configured: the frequency of the F1 chip and the fact that the ITM port was open (here is my .cfg file):
source [find interface/stlink-v2.cfg]
source [find target/stm32f1x.cfg]
init
tpiu config internal - uart off 72000000
itm ports on
and in another terminal I ran the parser script from GitHub: python3 swo_parser.py

Linkages of Blocks in Hyperledger Blockchain

In the Bitcoin blockchain, blocks are linked by the hash of the previous block. How are blocks linked in the Hyperledger Fabric blockchain? And how can they be visualized?
You can find how a block is structured in Hyperledger Fabric by inspecting the protobuf definitions, where the block is defined as:
message Block {
  BlockHeader header = 1;
  BlockData data = 2;
  BlockMetadata metadata = 3;
}
Drilling down further to the definition of BlockHeader:
message BlockHeader {
  uint64 number = 1;        // The position in the blockchain
  bytes previous_hash = 2;  // The hash of the previous block header
  bytes data_hash = 3;      // The hash of the BlockData, by MerkleTree
}
you will see that blocks in Hyperledger Fabric are chained with hashes in a very similar fashion to Bitcoin and many other blockchain solutions. The hash H(N) of block N is computed by the following scheme:
H(N) = hash([transactions] || H(N-1) || N)
i.e. the content of the block, the previous block hash and the block sequence number.
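As an illustration of that chaining scheme, here is a small Go sketch. It mirrors the formula above, not Fabric's exact byte-level encoding (Fabric serializes the block header before hashing):

package main

import (
    "crypto/sha256"
    "encoding/binary"
    "fmt"
)

// blockHash illustrates H(N) = hash([transactions] || H(N-1) || N)
func blockHash(number uint64, previousHash, dataHash []byte) []byte {
    num := make([]byte, 8)
    binary.BigEndian.PutUint64(num, number)
    h := sha256.New()
    h.Write(dataHash)     // hash of the block's transactions (BlockData)
    h.Write(previousHash) // hash of the previous block header
    h.Write(num)          // block sequence number
    return h.Sum(nil)
}

func main() {
    d0 := sha256.Sum256([]byte("genesis transactions"))
    h0 := blockHash(0, nil, d0[:])
    d1 := sha256.Sum256([]byte("block 1 transactions"))
    h1 := blockHash(1, h0, d1[:]) // block 1 commits to block 0's header hash
    fmt.Printf("block 0: %x\nblock 1: %x\n", h0, h1)
}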
Many aspects are well covered in the "Getting Started" documentation.
If you are using IBM Bluemix, here is a simple tutorial with Go:
# Create the parent directories on your GOPATH
mkdir -p $GOPATH/src/github.com/hyperledger
cd $GOPATH/src/github.com/hyperledger
# Clone the appropriate release codebase into $GOPATH/src/github.com/hyperledger/fabric
# Note that the v0.5 release is a branch of the repository. It is defined below after the -b argument
git clone -b v0.5-developer-preview https://github.com/hyperledger-archives/fabric.git
If you are installing the IBM Bluemix v0.6 release, use this for your git clone command:
# The v0.6 release exists as a branch inside the Gerrit fabric repository
git clone -b v0.6 http://gerrit.hyperledger.org/r/fabric
If the fabric is not installed properly on your GOPATH, you will see errors when building your chaincode.
Apart from that, you can simply refer to the documentation for the linking process.

Access Structure Sensor via OpenNI 2 from UWP App targeting Windows 10

I'd like to access the Structure Sensor (https://structure.io) via OpenNI 2 (https://github.com/occipital/openni2) from a UWP app running on a Windows 10 desktop.
The Setup
To this end I created a Windows Runtime Component (Universal Windows) in C++ alongside my actual UWP app. This component exports several functions that basically mimic the initialization behavior of one of the samples in the above OpenNI GitHub repo.
I extended the code to also iterate through all available devices:
// Initialize OpenNI
Status rc = OpenNI::initialize();
if (rc != STATUS_OK)
{
    logError("Initialize failed + " + std::string(OpenNI::getExtendedError()));
    return false;
}

// Get all attached sensors supported by OpenNI
Array<DeviceInfo> deviceList;
OpenNI::enumerateDevices(&deviceList);
for (int i = 0; i < deviceList.getSize(); i++) {
    logInfo(deviceList[i].getName());
    logInfo(deviceList[i].getUri());
}

// Actually open the device
Device device;
rc = device.open(deviceList[0].getUri());
if (rc != STATUS_OK)
{
    logError("Error = " + to_string(rc));
    logError("Couldn't open device " + std::string(OpenNI::getExtendedError()));
    return false;
}
The Problem
Calling the above code from my UWP app through the Windows Runtime Component succeeds for initializing OpenNI and enumerating all available devices:
[INFO] PS1080
[INFO] \\?\usb#vid_1d27&pid_0600#13261#{c3b5f022-5a42-1980-1909-ea72095601b1}
Actually opening the device via device.open is the problem (Error 1 = STATUS_ERROR):
[ERROR] Error = 1
[ERROR] Couldn't open device Could not open "\\?\usb#vid_1d27&pid_0600#13261#{c3b5f022-5a42-1980-1909-ea72095601b1}": USB device not found!
I'm also under the impression that the above error message is a bit misleading, as the actual message when no device is attached is:
[ERROR] Error = 1
[ERROR] Couldn't open device DeviceOpen using default: no devices found
I already tried to add a USB device capability to the package manifest, without any success.
<DeviceCapability Name="usb">
  <Device Id="vidpid:1D27 0600">
    <!--<Function Type="classId:ff * *" />-->
    <Function Type="name:vendorSpecific"/>
  </Device>
</DeviceCapability>
I also verified that the above code works when building a classic C++ program directly, without targeting UWP at all.
I would be very happy for any direction/hint you can provide.
The Solution
Microsoft's UWP policy simply doesn't allow access to USB devices. But you can work around that by granting the UWP AppContainer process access to the Structure IO sensor.
Some manual work is required though:
Open the registry editor and go to the USB entry of the Structure IO sensor: HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_1D27&PID_0600
Expand this node, right-click the node below it, and copy the whole key to the clipboard. The last part of the key is very important here and differs from machine to machine (e.g. HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_1D27&PID_0600\13261).
Paste this key into the following registry file where the brackets are. This registry modification basically allows every UWP app to access the Structure IO sensor.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_1D27&PID_0600\13261]
"Security"=hex:1,0,4,90,0,0,0,0,0,0,0,0,0,0,0,0,14,0,0,0,2,0,60,0,4,0,0,0,0,0,14,0,0,0,0,10,1,1,0,0,0,0,0,5,12,0,0,0,0,0,18,0,0,0,0,10,1,2,0,0,0,0,0,5,20,0,0,0,20,2,0,0,0,0,14,0,0,0,0,10,1,1,0,0,0,0,0,5,B,0,0,0,0,0,18,0,0,0,0,10,1,2,0,0,0,0,0,F,2,0,0,0,1,0,0,0
Modify the following batch script to point to the full path(!) of the above registry file. The script is needed because importing the security string into the registry is only allowed for Windows' SYSTEM account; that's why we create a scheduled task for it.
call schtasks /create /RU SYSTEM /SC ONCE /TN DeviceAC /TR "reg import c:\full\path\to\registry\file.reg" /ST 00:00
call schtasks /run /tn DeviceAC
call schtasks /delete /tn DeviceAC /f
Run the above batch script with Administrator privileges.
After successfully executing the script, make sure there's a new entry called "Security" below the node from step 2.
If the sensor is already connected to your PC, you'll need to reconnect it.
Now the code from my question above should work :)
You'll find a detailed read on how to solve this problem here:
https://developer.microsoft.com/en-us/windows/iot/Samples/CustomDeviceAccessor

Google Dataflow job failed with "insufficient data uploaded" error

I am trying to create a Dataflow job that processes a few thousand files and, for each file, writes to a different destination in GCS.
I end up with a lot of TextIO reads as sources, each with its own write to a destination, as separate flows. A sample code snippet looks like this:
List<PCollection<String>> pcs = new ArrayList<>();
for (int i = 0; i < 2000; i++) {
    pcs.add(p.apply(TextIO.Read.from("gs://wushilin-asia/some-folder/input-" + i + "/*")));
}
for (int i = 0; i < 2000; i++) {
    pcs.get(i).apply(TextIO.Write.to("gs://wushilin-asia/some-folder/output-" + i + "/"));
}
p.run();
This fails silently (it seems to hang forever) with the error "insufficient data uploaded" in the backend.
What is going wrong here?
It turned out that the pipeline structure was too complicated and the Dataflow job metadata storage couldn't handle it. Reducing the pipeline to fewer components solved the issue.

Improving Akka Remote Throughput

We're thinking about using Akka for client-server communications, and trying to benchmark data transfer. Currently we're trying to send a million messages, where each message is a case class with 8 string fields.
At this point we're struggling to get acceptable performance. We see about a 600KB/s transfer rate and idle CPUs on client and server, so something is going wrong. Maybe it's our Netty configuration.
This is our Akka config:
Server {
  akka {
    extensions = ["akka.contrib.pattern.ClusterReceptionistExtension"]
    loggers = ["akka.event.slf4j.Slf4jLogger"]
    loglevel = "DEBUG"
    logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
    log-dead-letters = 10
    log-dead-letters-during-shutdown = on
    actor {
      provider = "akka.cluster.ClusterActorRefProvider"
    }
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = "instance.myserver.com"
        port = 2553
        maximum-frame-size = 1000 MiB
        send-buffer-size = 2000 MiB
        receive-buffer-size = 2000 MiB
      }
    }
    cluster {
      seed-nodes = ["akka.tcp://server#instance.myserver.com:2553"]
      roles = [master]
    }
    contrib.cluster.receptionist {
      name = receptionist
      role = "master"
      number-of-contacts = 3
      response-tunnel-receive-timeout = 300s
    }
  }
}

Client {
  akka {
    loggers = ["akka.event.slf4j.Slf4jLogger"]
    loglevel = "DEBUG"
    logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = "127.0.0.1"
        port = 0
        maximum-frame-size = 1000 MiB
        send-buffer-size = 2000 MiB
        receive-buffer-size = 2000 MiB
      }
    }
    cluster-client {
      initial-contacts = ["akka.tcp://server#instance.myserver.com:2553/user/receptionist"]
      establishing-get-contacts-interval = "10s"
      refresh-contacts-interval = "10s"
    }
  }
}
UPDATE:
In the end, notwithstanding the discussion about serialisation (see below), we just converted our payloads to byte arrays so that serialisation would not affect the tests. We found that on a Core i7 with 24GB of RAM, using JeroMQ (i.e. ZeroMQ re-implemented in Java, so still not the fastest) we saw consistently about 200k msgs/sec, or about 20 MB/sec, whilst with raw Akka (i.e. without the ZeroMQ plugin) we saw about 10k msgs/sec, or just under 1 MB/sec. Trying Akka + ZeroMQ made the performance worse.
To make it easy to get started with remoting, Akka uses Java serialization for messages by default. This is not what you'd typically use in production, because it's not very fast and does not handle message versioning well.
What you should do is use Kryo or Protobuf for serialization, and you should be able to get much better numbers.
You can read about how here; there are also a few links to available serializers at the bottom of the page: http://doc.akka.io/docs/akka/current/scala/serialization.html
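For orientation, here is a minimal config sketch of what registering a non-default serializer looks like, in the same config format as the blocks above; the class names are placeholders (not from this answer), and the actual serializer class depends on the library you pick:

akka {
  actor {
    serializers {
      # hypothetical custom serializer, e.g. a Kryo-based one from a third-party library
      custom = "com.example.MyCustomSerializer"
    }
    serialization-bindings {
      # bind your message classes to it instead of the default Java serialization
      "com.example.ClientMessage" = custom
    }
  }
}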
Ok here's what we discovered:
Using our client-server model we were able to get fewer than about 3000 msg/sec without doing anything special, which was not really acceptable to us, and we were confused about what was happening because we weren't able to max out the CPU.
Therefore we went back to the Akka source code and found a benchmarking sample there:
sample.remote.benchmark.Receiver
sample.remote.benchmark.Sender
These are two classes that use Akka remote to ping-pong a bunch of messages between two JVMs on the same machine. Using this benchmark we were able to get around 10-15k msg/sec on a Core i7 with 24GB, using about 50% CPU. We found that adjusting the dispatchers and the threads allocated to Netty made a difference, but only marginally. Using asks instead of tells made it a bit slower, but not by much. Using a balancing pool of many actors to send and receive made it considerably slower.
For comparison, we did a similar test using JeroMQ and managed to get about 100k msg/sec using 100% CPU (same payload, same machine).
We also hacked the Sender and Receiver to use the Akka ZeroMQ extension to pipe messages directly from one JVM to another, bypassing Akka remote altogether. In this test we found that we could get fast sends and receives to start with, approx 100k msg/sec, but that performance quickly degraded. On further inspection, the ZeroMQ threads and the Akka actor switching do not play nicely together. We probably could have tuned the performance somewhat by being smarter about how ZeroMQ and Akka interacted with each other, but at that point we decided it would be better to go with raw ZeroMQ.
Conclusion: Don't use Akka if you care about fast serialisation of lots of data over the wire.