gRPC C++, client: "14: Connect Failed"

We are running the "helloworld" example from https://grpc.io/docs/quickstart/cpp.html#update-a-grpc-service and we receive the following error:
14: Connect Failed
Greeter received: RPC failed.
The server and the client are listening on: 0.0.0.0:50051. The Server is running.
First we see only a single packet arrive on the server and then the client crashes; I checked this with tcpdump. We tried it on different hosts as well as on the same host, but it didn't work in either case.
Should we use a different IP or a different port number?

I got the same issue on my PC (OS: Ubuntu 16.04 LTS, protobuf 3.4.0), so I searched for the reason and found this:
Reason
If, on a Linux machine, the environment has the usual "http_proxy" environment variable configured, gRPC will take it into account when trying to connect, but will then ignore the companion no_proxy setting:
For example:
$ env
http_proxy=http://106.1.216.121:8080
no_proxy=localhost,127.0.0.1
$ ./greeter_client
D0306 16:00:11.419586349 1897 combiner.c:351] C:0x25a9290 finish old_state=3
D0306 16:00:11.420527744 1896 tcp_client_posix.c:179] CLIENT_CONNECT: ipv4:106.1.216.121:8080: on_writable: error="No Error"
D0306 16:00:11.420567382 1896 combiner.c:145] C:0x25a69a0 create
D0306 16:00:11.420581887 1896 tcp_client_posix.c:119] CLIENT_CONNECT: ipv4:106.1.216.121:8080: on_alarm: error="Cancelled"
I0306 16:00:11.420617663 1896 http_connect_handshaker.c:319] Connecting to server 127.0.0.1:50051 via HTTP proxy ipv4:106.1.216.121:8080
Basically, it's using the http_proxy URL to connect even though localhost is in the no_proxy list. Since the default no_proxy includes localhost on most Linux machines, the end result is that any user with an http_proxy configured will never be able to connect to localhost. --- [1]
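Given that cause, one workaround (an assumption on my part, not something stated in the original report) is simply to strip http_proxy from the environment of the two processes before starting them:
$ env -u http_proxy ./greeter_server
$ env -u http_proxy ./greeter_client
or just unset http_proxy in the shell before launching them.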
Other solution
You can enable gRPC tracing with
export GRPC_TRACE=all && ./greeter_server, and do the same for the client.
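For example (a minimal sketch; GRPC_VERBOSITY=DEBUG is another documented gRPC environment variable and makes the debug-level trace output visible):
$ GRPC_VERBOSITY=DEBUG GRPC_TRACE=all ./greeter_server
$ GRPC_VERBOSITY=DEBUG GRPC_TRACE=all ./greeter_client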
Verification
Run ./greeter_server with tracing enabled in Terminal 1 and ./greeter_client in Terminal 2, and inspect the trace output. That should do the trick.
P.S. For more information about GRPC_TRACE, see gRPC environment variables.
Reference
[1] gRPC doesn't respect the no_proxy environment variable

Related

How to fix distcc error

I'm trying to get distcc working between two machines, CLIENT and SERVER. I "think" I have it set up right, but I am still getting this error:
(dcc_build_somewhere) Warning: failed to distribute, running locally instead
NOTHING is being compiled on the server.
My configuration is as follows
CLIENT = 192.168.0.14
SERVER = 192.168.0.15
/etc/default/distcc on SERVER
STARTDISTCC="true"
ALLOWEDNETS="192.168.0.0/24" // Also tried CLIENT IP here
LISTENER="192.168.0.15" // SERVER IP
NICE="10"
JOBS="16"
ZEROCONF="false"
client - yes, I know it's set to only compile on the server currently
DISTCC_HOSTS="192.168.0.15"
/etc/distcc/host set to 192.168.0.15
$HOME/.distcc/host set to 192.168.0.15
command
make -jx CC=distcc
I have tried on different software repos to see if there was some problem with an individual repo but the problem persists no matter the package.
EDIT
The "failed to distribute" error is a client-side error. Server-side, the log indicates:
distccd[1046] (dcc_job_summary) client: 192.168.0.14:40732 COMPILE_ERROR exit:1 sig:0 core:0 ret:0 time:94ms gcc certs/system_keyring.c
I fixed this by upgrading my version of GCC. The client and server are now running 5.x.
Check the log:
DISTCCD_OPTS="${DISTCCD_OPTS} --log-file /var/log/distccd.log"
In my case, my log said:
(dcc_warn_masquerade_whitelist) CRITICAL! /usr/local/lib/distcc not found. You must set up masquerade (see distcc(1)) to list whitelisted compilers or pass --enable-tcp-insecure. To set up masquerade automatically run update-distcc-symlinks.
So I had to run:
sudo update-distcc-symlinks
sudo ln -s /usr/lib/distcc /usr/local/lib/distcc # because I compiled from source
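As a quick sanity check afterwards (my own suggestion; paths and the service name can differ per distro), confirm the masquerade directory now exists and restart the daemon:
$ ls -l /usr/lib/distcc /usr/local/lib/distcc
$ sudo service distcc restart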

Installing and Viewing Neo4j on Existing AWS EC2 Instance

I'm trying to install the enterprise edition of Neo4j on an existing EC2 (Amazon Linux) instance. So far I've:
wget "link to enterprise"
untarred the file
renamed and moved the folder to NEO4J_HOME
Then I went into the neo4j.properties config file to make the following changes:
# Enable shell server so that remote clients can connect via Neo4j shell.
remote_shell_enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces)
remote_shell_host=127.0.0.1
# The port the shell will listen on, default is 1337
remote_shell_port=1337
EDITED: Christophe Willemsen pointed out that for my original error I had forgotten to restart the server at that point, but I was still unable to access the web server while it was running. So, to make it clearer, I've edited the remaining post:
I went to neo4j-server.properties and uncommented:
org.neo4j.server.webserver.address=0.0.0.0
And start the server
NEO4J_HOME/bin/neo4j start
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow
Starting Neo4j Server...WARNING: not changing user
process [28557]... waiting for server to be ready..... OK.
http://localhost:7474/ is ready.
checking the status:
NEO4J_HOME/bin/neo4j status
Neo4j Server is running at pid 28557
I can run the shell, but when I go to localhost:7474 I still cannot connect.
Any help would be appreciated. The only tutorials or help I've found assumed I was starting from scratch with a new instance. If someone could provide some instructions for installing, or fix my configuration, that would be great.
Thanks!
You have to edit neo4j-server.properties and uncomment the line with:
org.neo4j.server.webserver.address=0.0.0.0
So that the db listens on an external interface not just localhost, and you have to open the port (7474) in your firewall rules.
Make sure to secure access to the db though:
http://neo4j.com/docs/stable/security-server.html
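If the instance sits behind an EC2 security group rather than a host firewall, opening 7474 looks roughly like this (the security group ID and source CIDR are placeholders; ideally restrict the CIDR to your own IP):
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 7474 --cidr 203.0.113.10/32
Then, when browsing from your own machine, use the instance's public DNS name on port 7474 rather than localhost.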

WSO2 Identity Server startup error - Error in initializing thrift transport

I just installed WSO2 IS and receive this error when starting the server.
ERROR {org.wso2.carbon.identity.entitlement.internal.EntitlementServiceComponent}
- Failed to initialize Entitlement Service
ERROR {org.wso2.carbon.identity.entitlement.internal.EntitlementServiceComponent}
- Error in initializing thrift transport
org.apache.thrift.transport.TTransportException: Could not bind to port 10500
I've verified nothing is using the 10500 port.
In order to test WSO2 locally on my development workstation, I hard-coded a hostname via the hosts file and modified the HostName and MgtHostName entities in ./repository/conf/carbon.xml. One caveat is that my workstation has a dynamic IP address. When the IP does occasionally change, leaving the hostname pointing to an unresponsive IP, starting WSO2 IS fails with the exact same error.
Repairing the host entry and restarting the WSO2 Identity Server resolves it.
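For illustration only (the hostname wso2is.local and the IP are made-up examples), the entries involved look like this:
192.168.1.50   wso2is.local        (in /etc/hosts, or C:\Windows\System32\drivers\etc\hosts)
and in repository/conf/carbon.xml:
<HostName>wso2is.local</HostName>
<MgtHostName>wso2is.local</MgtHostName>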
I'm not sure, but you can try this:
1. Kill the Java process if you are on a Windows system; on Linux, check whether another process that was not shut down properly is still running, by searching with ps -ef | grep java.
2. Restart the IS server.
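To double-check the port and any leftover Java processes before restarting, something along these lines works (my own suggestion; commands differ by OS):
ss -ltnp | grep 10500        (Linux; on Windows: netstat -ano | findstr 10500)
ps -ef | grep java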

Setting up JMeter for Distributed testing in AWS with connectivity issues

I have to do distributed testing using JMeter. The objective is to have multiple remote servers in AWS, controlled by one local client machine, send a file download request to another server in AWS.
How can I set up the different servers in AWS?
How can I connect to them remotely?
Can someone provide some step by step instructions on how to do it?
I have tried several things but keep running into connectivity issues across networks.
We had a similar task and we ran into a bunch of issues as well. Here are the details of the whole process and what we did to resolve the issues we encountered. Hope it helps.
We needed to send requests from 5 servers located in various regions of the world. So we launched 5 micro instances in AWS, each in a different region. We chose the regions to be as geographically apart as possible.
Remote (server) JMeters config
Here is how we set up each instance.
Installed java:
$ sudo apt-get update
$ sudo apt-get install default-jre
Installed JMeter:
$ mkdir jmeter
$ cd jmeter;
$ wget ftp://apache.mirrors.pair.com//jmeter/binaries/apache-jmeter-2.9.tgz
$ gunzip apache-jmeter-2.9.tgz;tar xvf apache-jmeter-2.9.tar
Edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the server.rmi.localport setting. We changed the port to 50000.
server.rmi.localport=50000
Started the JMeter server, making sure the address and the port the server reports listening on are correct.
$ cd ~/jmeter/apache-jmeter-2.9/bin
$ ./jmeter-server
Local (client) JMeter config
Then, on our local client machine, we set up JMeter to run tests remotely on these instances:
We made sure to use the same version of JMeter as the one running on the servers, and installed Java and JMeter as described above.
Enabled remote testing by editing the jmeter.properties file that can be found in the bin folder of the JMeter installation. The parameter remote_hosts needed to be set with the public DNS of the remote servers we were connecting to.
remote_hosts=54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x
We were now able to tell our client JMeter instance to run tests on any or all of our specified remote servers.
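As a hedged example, kicking off a non-GUI run on all of the configured remote servers looks like this (test.jmx and results.jtl are placeholder file names; -R with a host list targets specific servers instead):
$ ./jmeter -n -t test.jmx -r -l results.jtl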
Issues and resolutions
Here are the issues we encountered and how we resolved them:
The client failed with:
ERROR - jmeter.engine.ClientJMeterEngine: java.rmi.ConnectException: Connection refused to host: 127.0.0.1
It was due to the server host returning the private IP address as its address because of Amazon NAT.
We fixed this by setting the parameter RMI_HOST_DEF that the /usr/local/jmeter/bin/jmeter-server script includes in starting the server:
RMI_HOST_DEF=-Djava.rmi.server.hostname=54.xx.xx.xx
Now, the AWS instance returned the server’s external IP, and we could start the test.
When the server node attempted to return the result, it tried to connect to the external IP address of our local client machine, but it threw a connection refused error:
2013/05/16 12:23:37 ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: xxx.xxx.xxx.xx;
We resolved this issue by setting up reverse tunnels at the client side.
First, we edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the client.rmi.localport setting. We changed the port to 60000:
client.rmi.localport=60000
Then we connected to each of the servers using SSH, and set up a reverse tunnel to port 60000 on the client.
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -R 60000:localhost:60000 ubuntu@54.x.x.x
We kept each of these sessions open, as the JMeter server needs to be able to deliver the test results to the client.
Then we set up the JVM_ARGS environment variable on the client, in the jmeter.sh file in the /bin folder:
export JVM_ARGS="-Djava.rmi.server.hostname=localhost"
By doing this, JMeter will tell the servers to connect to localhost:60000 for delivering their results. This ends up being tunneled back to the client.
The SSH connections to the servers kept dropping after staying idle for a little while. To prevent that, we added a parameter to each SSH tunnel command directing the client to send a null packet after 60 seconds of inactivity, to keep the connection alive:
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -o ServerAliveInterval=60 -R 60000:localhost:60000 ubuntu@54.x.x.x
(.ssh/config version of all required SSH settings:
Host 54.x.x.x
HostName 54.x.x.x
Port 22
User ubuntu
ServerAliveInterval 60
RemoteForward 127.0.0.1:60000 127.0.0.1:60000
IdentityFile ~/.ssh/54-x-x-x.us-east.pem
IdentitiesOnly yes
Just use ssh 54.x.x.x after setting this up.
)
I just went through this on OpenStack and found the same issues... no idea why the JMeter remoting documentation only covers half the required steps. You can do it without tunnels or touching the properties files.
You need
All nodes to advertise their public IP - on AWS/OS this defaults to the private IP
Ingress rules for the RMI port, which defaults to 1099 - I use this default
Ingress rules for the RMI "local" port, which defaults to dynamic. Below I use 4001 for the client and 4000 for the servers. The port can be the same, but note the properties are different (example CLI calls for these ingress rules follow below).
If you are using your workstation as the client you probably still need tunnels. Above Archana Aggarwal has good tips for tunnels.
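As a hedged example of those ingress rules with the AWS CLI (the security group ID and source CIDR are placeholders):
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 1099 --cidr 10.0.0.0/16
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 4000-4001 --cidr 10.0.0.0/16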
Remote servers
Set java.rmi.server.hostname and server.rmi.localport inline or in the properties file.
jmeter-server -Djava.rmi.server.hostname=publicip -Dserver.rmi.localport=4000
Sneaky server on client
You can also run one on the same machine as the client. For clarity, I've set java.rmi.server.hostname but left server.rmi.localport as dynamic:
jmeter-server -Djava.rmi.server.hostname=localip
Client
Set java.rmi.server.hostname and client.rmi.localport inline or in the properties file. Use -R etc like so:
jmeter -n -t Test.jmx -Rremotepublicip1,remotepublicip2 -Djava.rmi.server.hostname=clientpublicip -Dclient.rmi.localport=4001 -GmypropA=1 -GmypropB=2 -lresults.jtl
When you go for distributed testing using JMeter in AWS, I would suggest using Docker, which helps you bring up the JMeter test infrastructure very quickly. This way we can also ensure that the same versions of Java and JMeter are installed in all of the Amazon instances, which is very important for JMeter distributed testing.
Ensure that you set the properties below and that the ports are open for jmeter-server (they do not have to be exactly 1099 and 50000):
server.rmi.localport=50000
server_port=1099
java.rmi.server.hostname=SERVER_IP
For the client:
client.rmi.localport=60000
java.rmi.server.hostname=SERVER_IP - this step is very important, as the containers in the AWS instances have their own IP addresses in the Docker network, so the master and slave cannot communicate. So we explicitly set this property.
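When the nodes run in containers, the relevant ports also have to be published to the host; a rough sketch for a slave container (my-jmeter-server is a hypothetical image name):
$ docker run -d -p 1099:1099 -p 50000:50000 my-jmeter-server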
More info:
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-in-aws/

Problems getting RabbitMQ and Django-Celery Running: Target Machine actively refused connection

I am trying to get Django-Celery running on my Django App. I cannot get the worker server to run. When I try I get the message: No Connection could be made because the target machine actively refused it
Here is what I have done so far. First, I installed the django celery package: http://pypi.python.org/pypi/django-celery
I can load it into Python without problems. I also installed the RabbitMQ server per the Windows install instructions: http://www.rabbitmq.com/install.html#windows
Starting the Python tutorials on the RabbitMQ site, I saw the need to install pika: http://pypi.python.org/pypi/pika. It imports without any problems.
From there I start the RabbitMQ server by running this at the command line: rabbitmq-service start
I get the message back that Service RabbitMQ started
Here is where I start to have problems.
I attempted the first steps in django-celery: http://packages.python.org/django-celery/getting-started/first-steps-with-django.html and the "hello world" example on the rabbitMQ site: http://www.rabbitmq.com/tutorials/tutorial-one-python.html
In both cases I get the message: No Connection could be made because the target machine actively refused it
My first thought was that this sounded like a firewall problem. So I went into the Windows 7 firewall and added inbound and outbound rules to open the local and remote ports 5672 and 5673 for the TCP protocol, but I still get the same error message.
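One quick check worth doing (my own suggestion) is whether anything is listening on the broker port at all:
netstat -an | findstr 5672
If nothing shows up, the broker isn't actually listening and firewall rules won't make a difference.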
When I run rabbitmqctl status I get the message:
Error: unable to connect to node 'rabbit@hostname': nodedown
diagnostics:
- nodes and their ports on hostname: [{rabbitmqctl18856, 505031}]
Does that mean it is trying to operate on those ports? What about the default 5672?
Any suggestions?
UPDATE: This was actually a problem resulting from several failed RabbitMQ installs conflicting with the latest installation. If you have to remove RabbitMQ, use the 'rabbitmq-service remove' command and not SC DELETE, which caused a lot of problems for me; I had to go in and clean up my Windows registry.
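For anyone hitting the same thing, a hedged sketch of the clean remove/reinstall sequence using the bundled rabbitmq-service command:
rabbitmq-service stop
rabbitmq-service remove
rabbitmq-service install
rabbitmq-service start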
The nodedown error indicated by rabbitmqctl suggests that the server isn't running on that machine.
Try going through the steps in RabbitMQ's troubleshooting guide. In particular, pay close attention to the logs. Has the server crashed for some reason? Could you post the logs somewhere?