WARN AEROSPIKE_ERR_CLIENT Socket write error: 111 - in-memory-database

I'm getting this error:
$ aql
2015-10-10 15:48:10 WARN AEROSPIKE_ERR_CLIENT Socket write error: 111
Error -1: Failed to seed cluster
Can anyone help me out with this?

Is Aerospike running on the same host where you ran aql?
If it is, is it listening on the local 127.0.0.1 interface and on the default port (3000)? (Feel free to share your configuration.)
If it is not, make sure to specify the host and port when running aql, using the -h and -p options: aql -h <ip> -p <port>
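If you are unsure what the node is bound to, here are a couple of quick checks from the server itself (a sketch; this assumes a standard Linux install and the default port 3000):
# Is anything listening on the default Aerospike port?
$ sudo netstat -tlnp | grep 3000
# Ask the node for the service address it advertises (asinfo ships with the Aerospike tools package)
$ asinfo -v service -h 127.0.0.1 -p 3000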

The port was not being freed even after exiting Vagrant.
Restarting the system freed all the ports, and the service came up.
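A reboot works, but a lighter-weight fix is usually to find whatever still holds the port and stop it (a sketch; 3000 is the default Aerospike port, adjust as needed):
# Find the PID bound to the port, then stop that process
$ sudo lsof -i :3000
$ sudo kill <pid>
This frees the port without restarting the whole system.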

Related

SSH reverse port forward on an EC2 AWS instance

I used to have an SSH reverse port forward from my local computer to a remote AWS EC2 server on port 9999 (9999 on both machines).
It used to work, but I created a new instance and now it doesn't anymore (half working). I'm not sure what I did to make it work back then, or whether something was changed.
I have a process running on my computer on port 9999, and I want it to listen on port 9999 of my EC2.
On my computer, curl "127.0.0.1:9999" is working.
But I want curl "ec2-xx-xx-xx-xx-xx.compute.amazonaws.com:9999" to work; for now it doesn't, giving me the error curl: (7) Failed to connect to ec2-xx-xx-xx-xx-xx.compute.amazonaws.com port 9999 after 59 ms: Connection refused
EC2 Security group is set to open 9999 on TCP for 0.0.0.0/0.
I create the forwarded port with the command:
ssh -R 9999:localhost:9999 -i "/home/example/XXX.pem" ubuntu@ec2-xx-xx-xx-xx-xx.compute.amazonaws.com
The SSH connection is established without errors.
Inside this SSH session I can even run curl "127.0.0.1:9999" and IT IS WORKING, reaching my local computer.
But requests from the web aren't getting through (curl "ec2-xx-xx-xx-xx-xx.compute.amazonaws.com:9999" doesn't work).
The path is good: if I install apache2 on port 80, curl "ec2-xx-xx-xx-xx-xx.compute.amazonaws.com:80" works (port 80 is added to the security group the same way).
I did sudo ufw disable, same problem.
Do you have an idea what I'm missing?
EDIT: In the ssh -R forwarding session on the EC2:
ubuntu@awsserver:~$ php -S 0.0.0.0:9999 -t .
[Wed Dec 14 16:35:11 2022] Failed to listen on 0.0.0.0:9999 (reason: Address already in use)
BUT, if I open a normal SSH session, I can run php -S 0.0.0.0:9999 -t ., and curl "ec2-xx-xx-xx-xx-xx.compute.amazonaws.com:9999" works everywhere as expected.
So it is telling me that the port is already in use (by the ssh -R command), yet the port appears closed when I try to connect to it... I don't get it.
The answer wasn't EC2/AWS related.
It's an SSH security default I had to override: by default, sshd binds remote-forwarded ports to the loopback interface only, so they aren't reachable from outside. Setting GatewayPorts yes in the server's sshd_config fixes it.
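For anyone hitting the same thing, the change goes on the server side (the EC2 instance); a minimal sketch, assuming a stock Ubuntu instance:
# On the EC2 instance: let remote hosts connect to remote-forwarded ports
$ echo "GatewayPorts yes" | sudo tee -a /etc/ssh/sshd_config
$ sudo systemctl restart ssh
After restarting sshd, re-establish the ssh -R session; the forwarded port should now be bound on all interfaces instead of loopback only.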

Running geth in EC2 gives error, panic: runtime error: invalid memory address or nil pointer dereference

What I am trying to achieve is to run ethereum/client-go on an AWS EC2 instance and be able to access it from a remote client, to play around with the Rinkeby test network.
I am trying to run a geth Docker image on an EC2 instance on AWS.
When I run Docker using the command below, I get the following error.
sudo docker run -it -p 8545:8545 -p 30303:30303 ethereum/client-go --rpc --rinkeby --syncmode "fast" --rpc --rpcapi 'db,eth,net,web3,personal' --rpcaddr XXX.XX.XXX.XXX --cache=1024
Where --rpcaddr XXX.XX.XXX.XXX is my Elastic IP
INFO [04-17|10:24:08] Maximum peer count ETH=25 LES=0 total=25
INFO [04-17|10:24:08] Starting peer-to-peer node instance=Geth/v1.8.4-unstable-92c6d130/linux-amd64/go1.10.1
INFO [04-17|10:24:08] Allocated cache and file handles database=/root/.ethereum/rinkeby/geth/chaindata cache=768 handles=1024
INFO [04-17|10:24:08] Writing custom genesis block
INFO [04-17|10:24:08] Persisted trie from memory database nodes=355 size=65.27kB time=1.082517ms gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
INFO [04-17|10:24:08] Initialised chain configuration config="{ChainID: 4 Homestead: 1 DAO: <nil> DAOSupport: true EIP150: 2 EIP155: 3 EIP158: 3 Byzantium: 1035301 Constantinople: <nil> Engine: clique}"
INFO [04-17|10:24:08] Initialising Ethereum protocol versions="[63 62]" network=4
INFO [04-17|10:24:08] Loaded most recent local header number=0 hash=6341fd…67e177 td=1
INFO [04-17|10:24:08] Loaded most recent local full block number=0 hash=6341fd…67e177 td=1
INFO [04-17|10:24:08] Loaded most recent local fast block number=0 hash=6341fd…67e177 td=1
INFO [04-17|10:24:08] Regenerated local transaction journal transactions=0 accounts=0
INFO [04-17|10:24:08] Starting P2P networking
INFO [04-17|10:24:10] UDP listener up self=enode://350e33a2680260f24bd1837e59610173769023f6cf609ab59b1aca63dc867cce5d7cb520343ed9a04b8a98d5a7d08f57f9e2ee258502312fafad42d005179aab@[::]:30303
INFO [04-17|10:24:10] IPC endpoint opened url=/root/.ethereum/rinkeby/geth.ipc
INFO [04-17|10:24:10] IPC endpoint closed endpoint=/root/.ethereum/rinkeby/geth.ipc
INFO [04-17|10:24:10] Blockchain manager stopped
INFO [04-17|10:24:10] Stopping Ethereum protocol
INFO [04-17|10:24:10] RLPx listener up self=enode://350e33a2680260f24bd1837e59610173769023f6cf609ab59b1aca63dc867cce5d7cb520343ed9a04b8a98d5a7d08f57f9e2ee258502312fafad42d005179aab@[::]:30303
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xa6e76b]
goroutine 76 [running]:
github.com/ethereum/go-ethereum/eth/filters.(*EventSystem).eventLoop(0xc42c694d00)
/go-ethereum/build/_workspace/src/github.com/ethereum/go-ethereum/eth/filters/filter_system.go:434 +0x2eb
created by github.com/ethereum/go-ethereum/eth/filters.NewEventSystem
/go-ethereum/build/_workspace/src/github.com/ethereum/go-ethereum/eth/filters/filter_system.go:113 +0x104
Can anyone help me figure out what is causing this panic?
When I run the same Docker image with --rpcaddr 127.0.0.1 it works fine, but then I cannot access it from a remote client:
sudo docker run -it -p 8545:8545 -p 30303:30303 ethereum/client-go --rpc --rinkeby --syncmode "fast" --rpc --rpcapi 'db,eth,net,web3,personal' --rpcaddr 127.0.0.1 --cache=1024
I have tried using
Public DNS (IPv4),
IPv4 Public IP, and
Elastic IP
for the --rpcaddr value.
I have also opened the TCP ports (inbound and outbound) in the AWS security group.
Am I doing this right? Is this the right way to run a web3 provider?
You should be using volumes, in my opinion, because the client will try to download the Ethereum blockchain, but with your current setup there is nowhere to persist it.
Have a look at this page:
https://github.com/ethereum/go-ethereum/wiki/Running-in-Docker
"To persist downloaded blockchain data between container starts, use Docker data volumes. Replace /path/on/host with the location you want to store the data in."
$ docker run -it -p 30303:30303 -v /path/on/host:/root/.ethereum ethereum/client-go
See if this helps. If you still have issues I'll be happy to help, or consider looking through the geth issues on GitHub. Someone got a similar error in 2017 and logged an issue; see here:
https://github.com/ethereum/go-ethereum/issues/15079
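Adapted to the command from the question, that would look something like this (a sketch: /path/on/host is a placeholder for wherever you want the chain data, and binding --rpcaddr to 0.0.0.0 instead of the Elastic IP is my assumption: inside the container the Elastic IP is not a local interface, so geth cannot bind to it, while the -p 8545:8545 mapping still exposes RPC on the instance's public address):
$ sudo docker run -it -p 8545:8545 -p 30303:30303 \
    -v /path/on/host:/root/.ethereum \
    ethereum/client-go --rpc --rinkeby --syncmode "fast" \
    --rpcapi 'db,eth,net,web3,personal' --rpcaddr 0.0.0.0 --cache=1024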

Confd error: ERROR 501: All the given peers are not reachable (Tried to connect to each peer twice and failed) [0]

While debugging, I realised that confd doesn't pick up the keys, and my journal looks like this:
Sep 18 18:31:50 ip-10-171-54-76.ec2.internal docker[24891]: [nginx] waiting for confd to refresh nginx.conf
Sep 18 18:31:56 ip-10-171-54-76.ec2.internal docker[24891]: 2014-09-18T18:31:56Z 9122c7a54edc confd[9572]: ERROR 501: All the given peers are not reachable (Tried to connect to each peer twice and failed) [0]
I used nsenter to log in to the running container to run some debugging experiments. I ran this command:
confd -onetime -node 172.17.42.1:4001 -config-file /etc/confd/conf.d/nginx.toml
Then I received the same error as above:
confd[12894]: ERROR 501: All the given peers are not reachable (Tried to connect to each peer twice and failed) [0]
I am totally clueless at this point. I am using EC2 with the stable version of CoreOS and I am sure that etcd is running on the host. Also, I can ping the host from inside the container successfully.
Any ideas on what's wrong?
Assistance will be much appreciated.
This error indicates that your etcd cluster isn't operating correctly, so confd has nothing to watch. It has probably lost quorum. The logs (journalctl -u etcd) should indicate what happened.
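A few quick checks from the CoreOS host (a sketch, assuming the v2-era etcd HTTP API that stable CoreOS shipped at the time, listening on port 4001):
# Is etcd answering at all?
$ curl -s http://127.0.0.1:4001/version
# Which machines does the cluster know about?
$ curl -s http://127.0.0.1:4001/v2/machines
# And what happened to it?
$ journalctl -u etcd
If these work on the host but the same requests fail from inside the container, also check that etcd is listening on the docker0 bridge address (172.17.42.1) and not just on loopback.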

Can't connect to VM running Django

Using VirtualBox, I have a NAT enabled VM running Centos 7. The host OS is Windows 7. I can't seem to access the Django web server running inside the VM. What am I missing?
I have two port forwarding rules set for the virtual machine: one for SSH and one for port 8000.
I start the Django web server on the guest OS with:
python manage.py runserver 0.0.0.0:8000
And I try to visit the webpage on the host OS at:
http://localhost:8000
Google Chrome gives me the error code ERR_CONNECTION_RESET.
The result of curl on the host OS:
[user@win7 ~ ]$ curl http://localhost:8000
curl: (56) Recv failure: Connection reset by peer
Here is the result of a netstat performed on the guest OS:
[user@vm ~ ]$ netstat -na | grep 8000
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN
Here is the result of a netstat performed on the host OS (with Cygwin):
[user@win7 ~ ]$ netstat -na | grep 8000
TCP 0.0.0.0:8000 0.0.0.0:0 LISTENING
It is also worth mentioning that the SSH rule works. I can SSH into the machine with no problems.
This is not a solution but a work-around for my problem. Maybe it will help anyone encountering a similar problem who just wants to connect to their VM's web server.
Since SSH was working, I figured I could access the webpage via an SSH tunnel. The syntax for doing so on the command line is:
ssh -L <local-port>:<remote-host>:<remote-port> <user>@<ssh-host>
So in my situation, to open the tunnel I would run:
ssh -L 8000:127.0.0.1:8000 <user>@<ssh-host>
This would allow me to browse to http://localhost:8000 and access the website.
You can also do this via PuTTY, but I won't explain that here, so just Google for a guide.
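With VirtualBox's typical NAT setup, the SSH rule usually forwards a host port (often 2222) to the guest's port 22, so the full tunnel command would look something like this (the user name and the 2222 port are assumptions; use whatever your own forwarding rule says):
$ ssh -p 2222 -L 8000:127.0.0.1:8000 user@localhost
While that session is open, http://localhost:8000 on the host is carried over SSH to the Django server inside the VM.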
The SSH tunnel is an OK work-around, but the problem is almost certainly CentOS 7, which now uses firewalld rather than iptables to manage access. And, unlike iptables, the default configuration is quite restrictive.
If
ps -ae | grep firewall
returns something like
602 ? 00:00:00 firewalld
your system is running firewalld, not iptables. They do not run together.
To fix your VM so that you can access your Django site from the host, use the commands:
firewall-cmd --zone=public --add-port=8000/tcp --permanent
firewall-cmd --reload
Many thanks to pablo v in the post "Access django server on virtual Machine" for pointing this out.
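To confirm the rule actually took effect, firewalld can list what a zone currently allows (standard firewall-cmd queries):
$ sudo firewall-cmd --zone=public --list-ports
$ sudo firewall-cmd --zone=public --list-all
You should see 8000/tcp in the ports list after the reload.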

Apache Spark - Connection refused for worker

Hi, I am new to Apache Spark and trying to learn it.
While creating a new standalone cluster, I ran into this error.
I started my master and it is active on port 7077; I can see that in the UI (port 8080).
While starting the worker using the command
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.0.56:7077
I get a connection refused error:
14/07/22 13:18:30 ERROR EndpointWriter: AssociationError [akka.tcp://sparkWorker@node-physical:55124] -> [akka.tcp://sparkMaster@192.168.0.56:7077]: Error [Association failed with [akka.tcp://sparkMaster@192.168.0.56:7077]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkMaster@192.168.0.56:7077]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: /192.168.0.56:7077
Please help me with this error; I have been stuck on it for a long time.
I hope the information is enough. Please help!
In my case, I went to /etc/hosts, removed the line with 127.0.1.1, wrote "MASTER_IP MACHINE_NAME" instead, and it worked.
Try "./sbin/start-master -h ". It works, when I specify the host name as IP address.
Change SPARK_MASTER_HOST=<ip> in the spark-env.sh of the master node.
Then restart the master; if you grep the process, you will see it change from
java -cp /spark/conf/:/spark/jars/* -Xmx1g org.apache.spark.deploy.master.Master --host <HOST NAME> --port 7077 --webui-port 8080
to
java -cp /spark/conf/:/spark/jars/* -Xmx1g org.apache.spark.deploy.master.Master --host <HOST IP> --port 7077 --webui-port 8080
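To restart the master after editing spark-env.sh, the stock sbin scripts are enough (standard Spark standalone scripts):
$ ${SPARK_HOME}/sbin/stop-master.sh
$ ${SPARK_HOME}/sbin/start-master.sh
Then re-run the grep above to confirm the --host argument changed.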
Check if your firewall is turned off, as it might be blocking the worker connection. Turn it off either temporarily:
$ sudo service iptables stop
or permanently:
$ sudo chkconfig iptables off
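Those two commands assume an iptables-based distro; on newer systems that run firewalld instead, the equivalents would be:
$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld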
It seems like Spark is very picky about IPs and machine names. When starting your master, it will use your machine name to register the Spark master; if that name is not reachable from your workers, connecting will be almost impossible.
A way to solve it is to start your master like this:
SPARK_MASTER_IP=YOUR_SPARK_MASTER_IP ${SPARK_HOME}/sbin/start-master.sh
Then you will be able to connect your slaves like this:
${SPARK_HOME}/sbin/start-slave.sh spark://YOUR_SPARK_MASTER_IP:PORT
I hope it helps!
Did you add the entries for the master and worker nodes in /etc/hosts? If not, add every machine's IP and hostname mapping in /etc/hosts on all machines, as sketched below.
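For example, an /etc/hosts mapping replicated on every machine might look like this (the IPs and names are placeholders for your own cluster):
192.168.0.56   spark-master
192.168.0.57   spark-worker-1
192.168.0.58   spark-worker-2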
For Windows: spark-class org.apache.spark.deploy.master.Master -h [Interface IP to bind to]
I had a similar problem in a Docker container; I solved it by setting the host for the master and driver to localhost. Specifically:
set('spark.master.hostname', 'localhost')
set('spark.driver.hostname', 'localhost')
I do not have DNS, so on the master node I added /etc/hosts entries with the IPs and hostnames of all master and worker nodes. On the worker nodes, I added the master node's IP and hostname to /etc/hosts.