Tendermint: error when starting a tendermint node as per the sample - blockchain

I tried to create an application as per the guide in the Tendermint documentation. After I started the application and the tendermint node, I got the error below. I am using go version go1.15.5 linux/amd64 and tendermint v0.34.0-rc4-148-g095e9cd.
[bc@localhost kvstore]$ TMHOME="/tmp/example" tendermint node --proxy_app=unix://example.sock
I[2020-12-01|13:16:59.697] Version info module=main software=v0.34.0-rc4-148-g095e9cd block=11 p2p=8
I[2020-12-01|13:16:59.702] Starting Node service module=main impl=Node
I[2020-12-01|13:16:59.703] Starting StateSync service module=statesync impl=StateSync
I[2020-12-01|13:16:59.738] Started node module=main nodeInfo="{ProtocolVersion:{P2P:8 Block:11 App:0} DefaultNodeID:3cf5ea6219c57fd906c042f767748988ba070db7 ListenAddr:tcp://0.0.0.0:26656 Network:test-chain-1mrgVg Version:v0.34.0-rc4-148-g095e9cd Channels:40202122233038606100 Moniker:localhost.localdomain Other:{TxIndex:on RPCAddress:tcp://127.0.0.1:26657}}"
E[2020-12-01|13:17:00.758] Stopping abci.socketClient for error: read message: EOF module=abci-client connection=consensus
E[2020-12-01|13:17:00.758] consensus connection terminated. Did the application crash? Please restart tendermint module=proxy err="read message: EOF"
E[2020-12-01|13:17:00.758] Error in proxyAppConn.BeginBlock module=state err="read message: EOF"
E[2020-12-01|13:17:00.758] Error on ApplyBlock module=consensus err="read message: EOF"
I[2020-12-01|13:17:00.758] captured terminated, exiting... module=main
I[2020-12-01|13:17:00.758] Stopping Node service module=main impl=Node
I[2020-12-01|13:17:00.758] Stopping Node module=main
I[2020-12-01|13:17:00.760] Stopping StateSync service module=statesync impl=StateSync
I[2020-12-01|13:17:00.760] Closing rpc listener module=main listener="&{Listener:0xc00000d440 sem:0xc000039200 closeOnce:{done:0 m:{state:0 sema:0}} done:0xc000039260}"
E[2020-12-01|13:17:00.760] Error serving server module=main err="accept tcp 127.0.0.1:26657: use of closed network connection"
KVStore
[bc@localhost kvstore]$ ./example
badger 2020/12/01 13:16:55 INFO: All 0 tables opened in 0s
badger 2020/12/01 13:16:55 INFO: Replaying file id: 0 at offset: 0
badger 2020/12/01 13:16:55 INFO: Replay took: 6.807µs
badger 2020/12/01 13:16:55 DEBUG: Value log discard stats empty
I[2020-12-01|13:16:55.373] Starting ABCIServer service impl=ABCIServer
I[2020-12-01|13:16:55.405] Waiting for new connection...
I[2020-12-01|13:16:59.692] Accepted a new connection
I[2020-12-01|13:16:59.692] Waiting for new connection...
I[2020-12-01|13:16:59.692] Accepted a new connection
I[2020-12-01|13:16:59.692] Waiting for new connection...
I[2020-12-01|13:16:59.692] Accepted a new connection
I[2020-12-01|13:16:59.692] Waiting for new connection...
I[2020-12-01|13:16:59.692] Accepted a new connection
I[2020-12-01|13:16:59.692] Waiting for new connection...
E[2020-12-01|13:17:00.758] Connection error err="error reading message: proto: wrong wireType = 2 for field Height"
E[2020-12-01|13:17:00.761] Connection was closed by client
E[2020-12-01|13:17:00.761] Connection was closed by client
E[2020-12-01|13:17:00.761] Connection was closed by client
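For context, the ./example binary above is the socket-server wiring from the guide; the following is only a minimal sketch of that shape, assuming the abci/server and abci/types packages of the tendermint module pinned in go.mod (the KVStoreApplication name mirrors the guide, and the badger-backed handlers are left out here).
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"

	abciserver "github.com/tendermint/tendermint/abci/server"
	abcitypes "github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/libs/log"
)

// KVStoreApplication embeds BaseApplication so every ABCI method gets a
// default no-op implementation; the guide overrides CheckTx, DeliverTx,
// Commit and Query on top of badger (omitted in this sketch).
type KVStoreApplication struct {
	abcitypes.BaseApplication
}

func main() {
	app := &KVStoreApplication{}

	logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))

	// Listen on the same unix socket the node is pointed at via
	// --proxy_app=unix://example.sock.
	server := abciserver.NewSocketServer("unix://example.sock", app)
	server.SetLogger(logger)
	if err := server.Start(); err != nil {
		fmt.Fprintf(os.Stderr, "error starting socket server: %v\n", err)
		os.Exit(1)
	}
	defer server.Stop()

	// Block until interrupted.
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt, syscall.SIGTERM)
	<-c
}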
go.mod
module github.com/me/example
go 1.15
require (
	github.com/dgraph-io/badger v1.6.2
	github.com/tendermint/tendermint v0.34.0-rc4
)
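One detail worth noting: the node log reports software=v0.34.0-rc4-148-g095e9cd, a build 148 commits past the rc4 tag, while go.mod pins the library at v0.34.0-rc4, and "proto: wrong wireType = 2 for field Height" is the kind of error a mismatched ABCI protobuf definition between the two can produce. One way to test this, assuming the 095e9cd commit from the version string is resolvable by the Go module proxy, is to point go.mod at the same commit the binary was built from and rebuild the app:
go get github.com/tendermint/tendermint@095e9cd
go mod tidy
go build -o example
Alternatively, rebuild the tendermint binary from the v0.34.0-rc4 tag so that it matches the library version in go.mod.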

Related

Heroku Redis Error while reading from socket: (104, 'Connection reset by peer')

I'm running a Heroku Redis instance that has always worked fine, but after the upgrade, when I try to run celery workers, I get the following error:
[2021-07-23 11:06:08,135: ERROR/MainProcess] consumer: Cannot connect to redis://:**@ec2-54-***-***-*.eu-west-1.compute.amazonaws.com:*****//: Error while reading from socket: (104, 'Connection reset by peer').
Trying again in 12.00 seconds... (6/100)
Everything seems to be up to date and I can't figure out how to get it running again. I tried to drop the redis instance and create it from scratch, but nothing changed.

Error while launching tendermint node with "tendermint init"

I am launching tendermint with the command "tendermint init" followed by "tendermint node", but it shows an error message as follows:
tendermint node
I[04-06|20:44:11.141] Starting multiAppConn module=proxy impl=multiAppConn
I[04-06|20:44:11.141] Starting socketClient module=abci-client connection=query impl=socketClient
E[04-06|20:44:11.142] abci.socketClient failed to connect to tcp://localhost:26658. Retrying... module=abci-client connection=query
E[04-06|20:44:14.143] abci.socketClient failed to connect to tcp://localhost:26658. Retrying... module=abci-client connection=query
E[04-06|20:44:17.143] abci.socketClient failed to connect to tcp://localhost:26658. Retrying... module=abci-client connection=query
I am not able to figure out where I am making a mistake. Please help.
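Those retries usually mean that nothing is listening on tcp://localhost:26658, the node's default --proxy_app address; tendermint init only writes the config and keys, it does not start an application. As a quick check, assuming the built-in example apps that ship with tendermint, you can run the node with the in-process kvstore instead of an external ABCI app:
tendermint node --proxy_app=kvstore
or start an ABCI application on 26658 in a second terminal (for example abci-cli kvstore) before running tendermint node.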

Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused

I am using RabbitMQ server with Django 1.8 on CentOS. When I restart the rabbitmq-server, the operation completes and shows the message "restart ok". But when I check the status, it shows the following output:
Starting node rabbit@bynrySystem ...
Error: unable to connect to node rabbit@bynrySystem: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@bynrySystem]
rabbit@bynrySystem:
* connected to epmd (port 4369) on bynrySystem
* epmd reports: node 'rabbit' not running at all
no other nodes on bynrySystem
* suggestion: start the node
current node details:
- node name: 'rabbitmq-cli-59@bynrySystem'
- home dir: /var/lib/rabbitmq
- cookie hash: f/MoFCCKTONVCYhIDLxvew==
When I run a task, it gives the following error.
consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
A common cause for this error is that the cookie hash is incorrectly set after a shutdown. If you don't have any valuable data or definitions in your RabbitMQ node, just stop the service and remove /var/lib/rabbitmq/*, then start it again.
sudo rm -rf /var/lib/rabbitmq/*
This resets the node, so it deletes all messages.
What worked for me was rabbitmqctl reset:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
rabbitmqctl list_users
resulted in
Listing users ...
guest [administrator]
This will work for the default configuration. However, it seems better practice to create a vhost and a user with permissions to access that vhost, as sketched below.
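A sketch of that approach with rabbitmqctl, where myuser, mypassword and myvhost are placeholder names rather than values from the question:
rabbitmqctl add_user myuser mypassword
rabbitmqctl add_vhost myvhost
rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
The broker URL then changes accordingly, e.g. amqp://myuser:mypassword@127.0.0.1:5672/myvhost.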

Spark shuts down after 10 seconds of running

I'm trying to set up clusters in my AWS account (Amazon). I followed this tutorial to set them up. I ran into some problems regarding ports, but I finally got it to work until... it shut down after 10 seconds, giving me no more than this error:
16/05/12 12:52:46 INFO client.AppClient$ClientActor: Connecting to master spark://ip-to-my-machine:7077...
16/05/12 12:53:06 INFO client.AppClient$ClientActor: Connecting to master spark://ip-to-my-machine:7077...
16/05/12 12:53:26 ERROR cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
16/05/12 12:53:26 ERROR scheduler.TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
This was the bash command I ran to make it work:
bin/spark-shell --master spark://ip-to-my-machine:7077
I opened TCP port 7077; what could the problem be?

Spark 0.90 Stand alone connection refused

I am using Spark 0.9.0 in standalone mode.
When I try a streaming application in standalone mode, I get a connection refused exception.
I added the hostname in /etc/hosts and also tried with the IP alone. In both cases the worker got registered with the master without any issues.
Is there a way to solve this issue?
14/02/28 07:15:01 INFO Master: akka.tcp://driverClient@127.0.0.1:55891 got disassociated, removing it.
14/02/28 07:15:04 INFO Master: Registering app Twitter Streaming
14/02/28 07:15:04 INFO Master: Registered app Twitter Streaming with ID app-20140228071504-0000
14/02/28 07:34:42 INFO Master: akka.tcp://spark@127.0.0.1:33688 got disassociated, removing it.
14/02/28 07:34:42 INFO LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkMaster/deadLetters] to Actor[akka://sparkMaster/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkMaster%4010.165.35.96%3A38903-6#-1146558090] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
14/02/28 07:34:42 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.165.35.96:8910] -> [akka.tcp://spark@127.0.0.1:33688]: Error [Association failed with [akka.tcp://spark@127.0.0.1:33688]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@127.0.0.1:33688]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: /127.0.0.1:33688
I had a similar issue when running Spark in cluster mode. My problem was that the server was started with the hostname 'fluentd:7077' and not the FQDN. I edited
/sbin/start-master.sh
to reflect how my remote nodes connect, using the -ip flag.
/usr/lib/jvm/jdk1.7.0_51/bin/java -cp :/home/vagrant/spark-0.9.0-incubating-bin-hadoop2/conf:/home/vagrant/spark-0.9.0-incubating-bin-hadoop2/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar -Dspark.akka.logLifecycleEvents=true -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.master.Master --ip fluentd.alex.dev --port 7077 --webui-port 8080
Hope this helps.