Ropsten Ethereum not showing transaction information

I am using geth 1.7.3-stable-4bb3c89d to sync with the Ropsten network. I started synchronization in fast mode and restarted the service. When I type eth.syncing in the geth console it always shows false, but new blocks are being imported.
eth.blockNumber on my node returns 4374961, but when I try to get the info for one of the transactions in that block, it returns null.
When will the transaction info for these blocks be downloaded to my node? I have already removed the test database 3 times and started fresh in fast sync mode. My node currently has 11 peers. Do I need to change something to download the block info?
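To make the symptom concrete, a lookup along these lines is what returns null (a sketch; the block number is from above, and the IPC path assumes the default --testnet data directory, so adjust it if you use --datadir):

# Fetch the block including full transaction objects; if the node is on a
# different chain, the lookup returns null or an empty transactions array.
geth --exec 'eth.getBlock(4374961, true)' attach ipc:$HOME/.ethereum/testnet/geth.ipc

# A transaction hash copied from ropsten.etherscan.io returns null if that
# transaction is not part of the local chain (replace 0x... with a real hash).
geth --exec 'eth.getTransaction("0x...")' attach ipc:$HOME/.ethereum/testnet/geth.ipc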

I think I was on a different fork than the one tracked by https://ropsten.etherscan.io/blocks, as stated here. I updated my geth server and started syncing with the bootnodes available here. Now everything seems to be in sync with the Ropsten network.
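A quick way to confirm the fork suspicion is to compare the hash of a recent local block with the hash etherscan reports for the same height (a sketch, again assuming the default --testnet data directory):

# If this hash differs from the one shown at
# https://ropsten.etherscan.io/block/4374961, the node is on a different fork
# and needs a resync against the current bootnodes.
geth --exec 'eth.getBlock(4374961).hash' attach ipc:$HOME/.ethereum/testnet/geth.ipc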

Related

How Ethereum protocol works with geth

I am new to Ethereum and to blockchains in general. I learned that the Ethereum network's peer discovery is based on Kademlia; the distributed hash table and how it works were beautifully explained by Eleuth P2P.
I then used geth to connect to the Ethereum mainnet, and it discovered at most 2 to 3 peers in 5 to 6 minutes.
I know the algorithm now, but my concern is: how is the very first peer discovered? The internet is just a big set of routers and different types of computers (servers, desktops, etc.), and if you broadcast discovery messages the way ARP does, the internet would be flooded with them, which doesn't seem right. So how are the initial connections made? Also, we cannot trust a single network for the first connection, because that would make the system client/server based rather than decentralised. So how do the initial connections and peer discovery happen?
Do the broadcast messages have something like a TTL to prevent circular loops, as in TCP/IP? That also seems like a horrible idea to me.
Please explain.
In order to get going initially, geth uses a set of bootstrap nodes whose endpoints are recorded in the source code.
Source: Geth docs
Here's the list of the bootstrap nodes hardcoded in the Geth source code: https://github.com/ethereum/go-ethereum/blob/v1.10.11/params/bootnodes.go#L23
The --bootnodes option allows you to override this list with your own. Example from the docs linked above:
geth --bootnodes enode://pubkey1@ip1:port1,enode://pubkey2@ip2:port2,enode://pubkey3@ip3:port3
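Once the node is up, you can check whether discovery found anyone by querying the attached console (a sketch; the IPC path depends on your data directory):

# Number of peers currently connected.
geth --exec 'net.peerCount' attach ipc:$HOME/.ethereum/geth.ipc

# Full details (enode URL, remote address) of every connected peer.
geth --exec 'admin.peers' attach ipc:$HOME/.ethereum/geth.ipc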

How to store the RabbitMQ RABBITMQ_MNESIA_DIR on a remote disk

We have two EC2 servers. One has RabbitMQ on it; the second is a new one, intended for storage. Both run Amazon Linux 2.
On the second one we just purchased a volume, mounted at /data: /dev/nvme1n1 70G 104M 70G 1% /data
That is where we would love to push our RabbitMQ queues and data. Basically, we would like to set up RABBITMQ_MNESIA_DIR on the first (RabbitMQ) server so that it connects directly to the second one and saves its queues in the remote /data mentioned above.
Currently it is /var/lib/rabbitmq/mnesia, and our RabbitMQ config file is just the default /etc/rabbitmq/rabbitmq.conf.
I wonder if somebody has done this before, or can point us in the right direction on how to set RABBITMQ_MNESIA_DIR so that it connects directly to the remote EC2 instance and stores and works with the queues from there. Thank you.
At the end of the day, @Parsifal was right.
We ended up making one instance bigger and changing RABBITMQ_MNESIA_DIR.
This was a bit tricky, because of what happened after restarting with service rabbitmq-server restart.
First, we needed to make sure we had the right permissions on the /data/mnesia directory we had mounted. I managed it with chmod 755 -R /data, though read/write should be sufficient based on the docs.
Then we were trying to work out why it kept producing errors like "broker forced connection closure with reason 'shutdown'" and "Error on AMQP connection" right after startup.
So I checked the ownership of the current mnesia dir and the new one, and it turned out the new one was owned by root:root, unlike the original.
I switched it to drwxr-xr-x 4 rabbitmq rabbitmq 97 Dec 16 14:57 mnesia and it started working.
Maybe this will save you some headaches; I didn't realize there was a separate user and group for rabbitmq, since I didn't create it.
The only thing to add is that when you move the current working mnesia directory, you should consider copying it to the new location, since there is a lot of state that is actively used and run from there. I tried it without copying and even the admin password no longer worked. :D
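For anyone doing the same move, the steps above boil down to something like this sketch (paths are the Amazon Linux / RPM defaults; MNESIA_BASE is the parent of the per-node directory, and RABBITMQ_MNESIA_DIR can be used instead if you want to point at the node directory itself):

# Stop the broker before touching its data directory.
sudo service rabbitmq-server stop

# Copy the existing mnesia data to the new volume so users, vhosts and queues
# survive the move (starting from an empty directory resets everything,
# including the admin password).
sudo mkdir -p /data/mnesia
sudo cp -a /var/lib/rabbitmq/mnesia/. /data/mnesia/

# RabbitMQ runs as the rabbitmq user; root:root ownership here is what caused
# the "broker forced connection closure" errors above.
sudo chown -R rabbitmq:rabbitmq /data/mnesia

# Point the broker at the new location (the RABBITMQ_ prefix is optional in
# rabbitmq-env.conf).
echo 'MNESIA_BASE=/data/mnesia' | sudo tee -a /etc/rabbitmq/rabbitmq-env.conf

sudo service rabbitmq-server start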

Data on private Ethereum blockchain lost/disappears after a couple of days

I am deploying a private Ethereum blockchain (geth) on a virtual machine on Azure. After deploying my Solidity contracts on the blockchain and pointing my Node.js application at it, I am able to add data normally through the web APIs of the Node.js LoopBack app; everything works fine and I can see the added data using the GET APIs.
However, after 1-3 days (it varies) I am no longer able to retrieve the data I added through my GET APIs, while I am still able to add new data, which confirms that geth is running fine and wasn't interrupted.
I am running geth using:
geth --datadir ./myDataDir --rpc --networkid 1441 console 2>> myEth.log
myEth.log isn't showing anything wrong, and the Node.js logs are clean as well.
eth.syncing shows false, which means the node is synced.
The size of the myDataDir folder is still increasing, so logically the data should be in there somewhere, but it isn't showing up.
This is not a private blockchain!
--networkid 1441
This only says that you communicate with clients that also run a network with ID 1441. It might be unlikely, but if someone else runs a network with ID 1441, their node will connect to yours just fine. And if that other network with the same ID has a longer (more precisely, "heavier") chain, it overwrites your local chain.
To avoid this, use a more random network ID, maybe 7-9 digits, and disable discovery with
--nodiscover
Or just use the --dev preset.
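A start command with both changes applied would look something like this (the network ID below is just a random example; any hard-to-guess value works, as long as all of your own nodes use the same one):

# Random network ID plus discovery disabled: no foreign node that happens to
# use the same ID can connect and overwrite the local chain.
geth --datadir ./myDataDir --rpc --networkid 83749261 --nodiscover console 2>> myEth.log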

Unexplained changes to Ignite cluster membership

I am running a 12-node JVM Ignite cluster. Each JVM runs on its own VMware node. I am using ZooKeeper to keep these Ignite nodes in sync, with TCP discovery. I have been seeing a lot of node failures in the ZooKeeper logs.
Although the Java processes are running, I don't know why some Ignite nodes leave the cluster with "node failed" kinds of errors. VMware uses vMotion to do something they call "migration"; I am assuming that is some kind of filesystem sync process between VMware nodes.
I am also seeing pretty frequent "dumping pending object" and "Failed to wait for partition map exchange" messages in the Ignite JVM logs.
My env setup is as follows:
Apache Ignite 1.9.0
RHEL 7.2 (Maipo) on each of the 12 nodes
Oracle JDK 1.8
ZooKeeper 3.4.9
Please let me know your thoughts.
TIA
There are generally two possible reasons:
Memory issues. For example, if a node goes into a long GC pause, it can become unresponsive and therefore be removed from the topology. For more details, read here: https://apacheignite.readme.io/docs/jvm-and-system-tuning
Network connectivity issues. Check if the network between your VMs is stable. You may also want to try increasing the failure detection timeout: https://apacheignite.readme.io/docs/cluster-config#failure-detection-timeout
VM Migrations sometimes involve suspending the VM. If the VM is suspended, it won't have a clean way to communicate with the rest of the cluster and will appear down.
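If long GC pauses are the suspect, a first step is to turn on GC logging so pauses can be lined up with the "node failed" timestamps. A sketch for the JDK 8 setup above, assuming the stock ignite.sh launcher (which only applies its own default JVM_OPTS when the variable is not already set); the heap sizes are placeholders:

# GC logging for Oracle/OpenJDK 8; long GC or safepoint pauses will be visible
# in /var/log/ignite-gc.log and can be compared against the failure timestamps.
export JVM_OPTS="-Xms4g -Xmx4g -server \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime \
  -Xloggc:/var/log/ignite-gc.log"

# Start the node; the exported options are picked up by the launcher script.
$IGNITE_HOME/bin/ignite.sh /path/to/ignite-config.xml

The failure detection timeout itself (failureDetectionTimeout on IgniteConfiguration, 10 seconds by default) can then be raised per the second link if the pauses cannot be eliminated.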

How can I add a node to an existing network?

I created a network with 4 peers using docker-compose and Docker for Mac.
I deployed my blockchain on this network successfully.
Now I'm launching a 5th peer with another yml file, using the details of one of the previous peers as the discovery node.
It appears in the list returned by http://localhost:7050/network/peers; however, my blockchain is not deployed on this peer and I cannot use it to process transactions.
Do I have to deploy the chaincode again on this peer? Did I miss something?
This is a limitation of Fabric versions 0.5 and 0.6.
The network configuration cannot be changed in real time. If you use PBFT consensus, the network configuration is hardcoded in:
fabric/consensus/pbft/config.yaml
# Maximum number of validators/replicas we expect in the network
# Keep the "N" in quotes, or it will be interpreted as "false".
"N": 4
The challenge is in updating the configuration on all peers synchronously; otherwise they will not be able to reach consensus.
In one of the next Fabric versions this configuration parameter will be moved onto the blockchain, and it will then be possible to add new peers and modify the consensus configuration on the fly.
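As a rough illustration of what "synchronously" means here: every validating peer needs the same edit before the network can make progress again, along the lines of this sketch (how the changed file reaches each peer depends on how your images and volumes are set up):

# On every validating peer: bump the expected replica count, then restart the
# peer process; peers running different values of "N" cannot reach consensus.
sed -i 's/"N": 4/"N": 5/' fabric/consensus/pbft/config.yaml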
Update for the question in the comment:
I only saw this high-level roadmap proposal: