Setting up JMeter for Distributed testing in AWS with connectivity issues - amazon-web-services

I have to do distributed testing using JMeter. The objective is to have multiple remote servers in AWS, controlled by one local machine, send file download requests to another server in AWS.
How can I set up the different servers in AWS?
How can I connect to them remotely?
Can someone provide some step by step instructions on how to do it?
I have tried several things but keep running into connectivity issues across networks.

We had a similar task and we ran into a bunch of issues as well. Here are the details of the whole process and what we did to resolve the issues we encountered. Hope it helps.
We needed to send requests from 5 servers located in various regions of the world. So we launched 5 micro instances in AWS, each in a different region. We chose the regions to be as geographically apart as possible.
Remote (server) JMeter config
Here is how we set up each instance.
Installed java:
$ sudo apt-get update
$ sudo apt-get install default-jre
Installed JMeter:
$ mkdir jmeter
$ cd jmeter;
$ wget ftp://apache.mirrors.pair.com//jmeter/binaries/apache-jmeter-2.9.tgz
$ gunzip apache-jmeter-2.9.tgz;tar xvf apache-jmeter-2.9.tar
Edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the server.rmi.localport setting. We changed the port to 50000.
server.rmi.localport=50000
Started the JMeter server. Make sure the address and the port the server reports listening on are correct.
$ cd ~/jmeter/apache-jmeter-2.9/bin
$ ./jmeter-server
Local (client) JMeter config
Then, on our local client machine, we set up JMeter to run tests remotely on these instances:
Made sure to use the same version of JMeter as was running on the servers. Installed Java and JMeter as described above.
Enabled remote testing by editing the jmeter.properties file that can be found in the bin folder of the JMeter installation. The parameter remote_hosts needed to be set with the public DNS of the remote servers we were connecting to.
remote_hosts=54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x
We were now able to tell our client JMeter instance to run tests on any or all of our specified remote servers.
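For illustration, a non-GUI run kicked off from the client could look something like this (the test plan and results file names are just placeholders): the -r flag targets every host listed in remote_hosts, while -R targets an explicit subset.
$ cd ~/jmeter/apache-jmeter-2.9/bin
$ ./jmeter -n -t download-test.jmx -r -l results.jtl
$ ./jmeter -n -t download-test.jmx -R 54.x.x.x,54.x.x.x -l results.jtl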
Issues and resolutions
Here are the issues we encountered and how we resolved them:
The client failed with:
ERROR - jmeter.engine.ClientJMeterEngine: java.rmi.ConnectException: Connection refused to host: 127.0.0.1
It was due to the server host returning the private IP address as its address because of Amazon NAT.
We fixed this by setting the parameter RMI_HOST_DEF, which the /usr/local/jmeter/bin/jmeter-server script includes when starting the server:
RMI_HOST_DEF=-Djava.rmi.server.hostname=54.xx.xx.xx
Now, the AWS instance returned the server’s external IP, and we could start the test.
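As a side note, the same property can also be passed on the command line when starting the server instead of editing the script - a quick sketch with a placeholder IP (the answer further down uses the same approach):
$ cd ~/jmeter/apache-jmeter-2.9/bin
$ ./jmeter-server -Djava.rmi.server.hostname=54.xx.xx.xx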
When the server node attempted to return the results and connect back to the client, it tried to connect to the external IP address of our local machine, and it threw a connection refused error:
2013/05/16 12:23:37 ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: xxx.xxx.xxx.xx;
We resolved this issue by setting up reverse tunnels at the client side.
First, we edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the client.rmi.localport setting. We changed the port to 60000:
client.rmi.localport=60000
Then we connected to each of the servers using SSH, and set up a reverse tunnel to port 60000 on the client.
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -R 60000:localhost:60000 ubuntu@54.x.x.x
We kept each of these sessions open, as the JMeter server needs to be able to deliver the test results to the client.
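If keeping an interactive shell open per server is inconvenient, the same tunnel can be held open in the background instead - a sketch using standard OpenSSH flags (-N skips running a remote command, -f backgrounds the session after authentication):
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -f -N -R 60000:localhost:60000 ubuntu@54.x.x.x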
Then we set up the JVM_ARGS environment variable on the client, in the jmeter.sh file in the /bin folder:
export JVM_ARGS="-Djava.rmi.server.hostname=localhost"
By doing this, JMeter will tell the servers to connect to localhost:60000 for delivering their results. This ends up being tunneled back to the client.
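If you would rather not edit jmeter.sh, the same property can be supplied for a single run by setting JVM_ARGS inline - a sketch, with the test plan name again being a placeholder:
$ JVM_ARGS="-Djava.rmi.server.hostname=localhost" ./jmeter -n -t download-test.jmx -r -l results.jtl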
The SSH connections to the servers kept dropping after staying idle for a little while. To prevent that from happening, we added a parameter to each SSH tunnel command directing the client to send a null packet to the server after 60 seconds of inactivity to keep the connection alive:
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -o ServerAliveInterval=60 -R 60000:localhost:60000 ubuntu@54.x.x.x
(.ssh/config version of all required SSH settings:
Host 54.x.x.x
HostName 54.x.x.x
Port 22
User ubuntu
ServerAliveInterval 60
RemoteForward 127.0.0.1:60000 127.0.0.1:60000
IdentityFile ~/.ssh/54-x-x-x.us-east.pem
IdentitiesOnly yes
Just use ssh 54.x.x.x after setting this up.
)

I just went through this on OpenStack and found the same issues... no idea why the JMeter remoting documentation only covers half the required steps. You can do it without tunnels or touching the properties files.
You need
All nodes to advertise their public IP - on AWS/OpenStack this defaults to the private IP
Ingress rules for the RMI port, which defaults to 1099 - I use the default (see the AWS CLI sketch after this list)
Ingress rules for the RMI "local" port which defaults to dynamic. Below I use 4001 for the client and 4000 for servers. The port can be the same but note the properties are different.
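On the AWS side, those ingress rules could be added with the AWS CLI along these lines - a sketch only; the security group ID and source CIDR below are placeholders for your own values:
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 1099 --cidr 203.0.113.10/32
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 4000 --cidr 203.0.113.10/32
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 4001 --cidr 203.0.113.10/32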
If you are using your workstation as the client you probably still need tunnels. Archana Aggarwal's answer above has good tips for tunnels.
Remote servers
Set java.rmi.server.hostname and server.rmi.localport inline or in the properties file.
jmeter-server -Djava.rmi.server.hostname=publicip -Dserver.rmi.localport=4000
Sneaky server on client
You can also run one on the same machine as the client. For clarity I've set java.rmi.server.hostname but left server.rmi.localport as dynamic.
jmeter-server -Djava.rmi.server.hostname=localip
Client
Set java.rmi.server.hostname and client.rmi.localport inline or in the properties file. Use -R etc like so:
jmeter -n -t Test.jmx -Rremotepublicip1,remotepublicip2 -Djava.rmi.server.hostname=clientpublicip -Dclient.rmi.localport=4001 -GmypropA=1 -GmypropB=2 -lresults.jtl

When you go for distributed testing using JMeter in AWS, I would suggest using Docker, which will help you bring up the JMeter test infrastructure very quickly. This way you can also ensure that the same versions of Java and JMeter are installed on all the Amazon instances, which is very important for JMeter distributed testing.
Ensure that you set the properties below and that the ports are open for jmeter-server (they do not have to be exactly 1099 and 50000).
server.rmi.localport=50000
server_port=1099
java.rmi.server.hostname=SERVER_IP
For the client:
client.rmi.localport=60000
java.rmi.server.hostname=SERVER_IP - this step is very important, as the containers in the AWS instances have their own IP addresses in the Docker network, so the master and slaves cannot communicate otherwise. That is why we explicitly set this property.
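As a minimal sketch, the jmeter-server container needs those ports published so the master can reach it from outside the Docker network; the image name here is purely a placeholder (the article below walks through the actual images and setup):
$ docker run -d --name jmeter-server -p 1099:1099 -p 50000:50000 my-jmeter-server-image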
More info:
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-in-aws/

Related

Running code server on gcp cloud shell gives error when previewing

I'm trying to run code-server on gcp cloud shell. I downloaded the following version
https://github.com/cdr/code-server/releases/download/v3.9.2/code-server-3.9.2-linux-amd64.tar.gz, which I think is the correct one, extracted the contents and ran
code-server --auth none
This gave the following output
[2021-04-06T00:53:21.728Z] info code-server 3.9.2 109d2ce3247869eaeab67aa7e5423503ec9eb859
[2021-04-06T00:53:21.730Z] info Using user-data-dir ~/.local/share/code-server
[2021-04-06T00:53:21.751Z] info Using config file ~/.config/code-server/config.yaml
[2021-04-06T00:53:21.751Z] info HTTP server listening on http://127.0.0.1:8080
[2021-04-06T00:53:21.751Z] info - Authentication is disabled
[2021-04-06T00:53:21.751Z] info - Not serving HTTPS
Now when I try Web Preview -> Preview on port 8080, nothing happens; I just get a blank screen, and on the code-server console I see the following error:
2021-04-06T00:50:04.470Z] error vscode Handshake timed out {"token":"e9b80ff7-10f9-4089-8497-b98688129452"}
I'm not sure what I need to do here.
In the Cloud Shell editor, create a file with a .sh extension (e.g. vscode.sh) containing these steps to install code-server:
# look up the latest release tag (informational; the steps below pin v3.10.2)
export VERSION=`curl -s https://api.github.com/repos/cdr/code-server/releases/latest | grep -oP '"tag_name": "\K(.*)(?=")'`
# download and unpack code-server v3.10.2
wget https://github.com/cdr/code-server/releases/download/v3.10.2/code-server-3.10.2-linux-amd64.tar.gz
tar -xvzf code-server-3.10.2-linux-amd64.tar.gz
cd code-server-3.10.2-linux-amd64
To run the vscode.sh file from the terminal:
./vscode.sh
If a "permission denied" warning appears, run chmod +x vscode.sh and then run the file again.
To navigate to the folder:
cd code-server-3.10.2-linux-amd64/
To navigate to the bin:
cd bin/
To start the server:
./code-server --auth none --port 8080
Now you can see the VS Code IDE in your browser, either by using the Web Preview -> Preview on port 8080 option or the HTTP server link in your terminal.
My gut says that one must study this article (Expose code-server) in great detail. I think you will find that code-server is listening on IP address 127.0.0.1 at port 8080. Your thinking then is to access this server using Web Preview on port 8080; however, pay attention to the IP addresses of your virtual machine. The IP address 127.0.0.1 is known as the loopback address. It is ONLY accessible to applications running on the SAME machine. My belief is that when you run Web Preview, you are trying to access the IP address of your Cloud Shell machine, which is NOT 127.0.0.1.
If you read the above article, the story goes on to show how to use SSH forwarding to provide a front-end to whatever this application may be.
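The general shape of that SSH forwarding is local port forwarding, roughly like this, with the user and host being placeholders for however you reach the machine running code-server:
$ ssh -L 8080:127.0.0.1:8080 user@remote-host
# then browse to http://localhost:8080 on your local machine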

Cannot reach react application via dns hosted on ec2

I just want to see my development version working on an EC2 instance to show it to some friends, and think about deploying it after all of the work is done, but React doesn't cooperate. :/
I did everything I always do.
Started an Ubuntu server on EC2
Applied a security group with 3000/tcp open to my instance
Installed all of my app's dependencies, npm 11.1 and its packages, via npm install
Ran npm start
and...
Nope... there is no "and"... just my tears over a bunch of attempts without reaching 3000/tcp via the public IP and DNS.
I even tested ping on it: I set ICMP echo request and response rules, tested, and it worked, but when I try to reach the application on port 3000/tcp, nothing.
Does someone have any idea?
As an image talks more than a thousand words, there it is... my nightmare.
PS: a curl on localhost:3000 inside the EC2 instance works just fine, while
another curl from outside the EC2 instance returns Connection Refused.
Looks like the application is bound to localhost (127.0.0.1). Update your start property to include --host 0.0.0.0
Refer: https://github.com/webpack/webpack-dev-server/issues/147
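For example, if the app is started through webpack-dev-server (as the linked issue assumes), the start script could look roughly like this - adjust to whatever dev server the project actually uses:
"scripts": {
  "start": "webpack-dev-server --host 0.0.0.0 --port 3000"
}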

Unable to register AWS host to Ambari server

While registering a host to the cluster of Ambari-server, I am getting the following error.
"Host checks were skipped on 1 hosts that failed to register."
I'm trying to install HDP 2.5 version on the instance of AWS.
I have tried to follow the documentation of Hortonworks.
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-installation/content/set_the_hostname.html
I have added the public IP address and public hostname to the /etc/hosts file and changed the hostname in the /etc/hostname file on the server and on the host. I rebooted both, and the hostname got changed. Then I stopped iptables with
sudo service iptables stop
After doing everything, the host registration is still failing. Kindly help. I am stuck.
Background
From my experience with Ambari (Hortonworks), you have to explicitly set up your Hadoop nodes in each other's /etc/hosts files with the actual names/IPs that the Hadoop services will bind to. NOTE: hostnames should also be FQDNs - fully qualified domain names.
For example if you're setting up the hosts as:
node01.mydom.com (10.0.0.2)
node02.mydom.com (10.0.0.3)
node03.mydom.com (10.0.0.4)
These entries should be in all 3 servers' /etc/hosts files, and these should be the names used when referencing them within Ambari's installation/setup wizards.
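For example, each node's /etc/hosts could contain entries like the following, matching the hosts above (the short aliases are optional):
10.0.0.2  node01.mydom.com  node01
10.0.0.3  node02.mydom.com  node02
10.0.0.4  node03.mydom.com  node03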
If you do not pay special attention to this detail, Ambari's server will fail to find/manage any of the other nodes that you're telling it to manage.
hostname of ambari-agents
The other item to look at is the ambari-agents and what hostnames they think they're running as.
$ ps -eaf|grep ambari_agent
root 3282 1 0 Jul30 ? 00:00:00 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py start --expected-hostname=node01.mydom.com
root 3290 3282 1 Jul30 ? 08:24:29 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/main.py start --expected-hostname=node01.mydom.com
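A quick sanity check on each node, assuming standard Linux hostname tooling, is to confirm the OS reports the same FQDN and IP that you put in /etc/hosts:
$ hostname -f    # should print the FQDN, e.g. node01.mydom.com
$ hostname -i    # should resolve to the IP listed in /etc/hosts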
Debugging further
In the screen where you're attempting to register the other nodes as agents, there's a full log of what's happening and you can typically get the commands from this area and attempt to run them directly. I've done this on a number of occasions. The commands will often be python ... commands which you can then copy/paste from the logs and run on the Ambari server where you're attempting to run the install.

SSH tunnelling to a remote server with django

I'm trying to set up an SSH tunnel to access my server (currently an Ubuntu 16.04 VM on Azure) to set up safe access to my Django applications running on it.
I was able to imitate the production environment with Apache WSGI and it works pretty well, but since I'm still developing the application I don't want to make it available to the broader public right now - just visible to a handful of people.
To the point: when I set up the SSH tunnel using PuTTY on Windows 10 (8000 to localhost:8000) and open http://localhost:8000/, I get the following error:
"Not Found HTTP Error 404. The requested resource is not found.".
How can I make it work? I run the server using manage.py runserver 0:8000.
I found somewhere that the error may be due to the fact that the application does not have access to ssh files, but I don't know whether that's the point here (or how to change it).
Regards,
Dominik
After hours of trying I was able to solve the problem.
First of all, I made sure PuTTY connects to the server and creates the desired tunnel. To do that, I right-clicked on the PuTTY window's title bar and clicked Event Log. I checked the log and found the following error:
Local port 8000 forwarding to localhost:8000 failed: Network error:
Permission denied
I was able to solve it by choosing another local port (9000 instead of 8000 in my case).
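For reference, the equivalent of that PuTTY tunnel with command-line OpenSSH would be roughly the following, with the user and VM address as placeholders:
$ ssh -L 9000:localhost:8000 azureuser@your-vm-address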
Second, I edited the sshd_config file: sudo vi /etc/ssh/sshd_config
and added these three lines:
AllowAgentForwarding yes
AllowTcpForwarding yes
GatewayPorts yes
I saved the file and restarted the ssh service:
sudo service ssh stop
sudo service ssh start
Now when I visit localhost:9000 everything works just fine.

Installing and Viewing Neo4j on Existing AWS EC2 Instance

I'm trying to install the enterprise edition of neo4j on an existing EC2 (Amazon linux) instance. So far I've
wget "link to enterprise"
untarred the file
renamed and moved the folder to NEO4J_HOME
then went into the neo4j.properties config file to make the following changes:
# Enable shell server so that remote clients can connect via Neo4j shell.
remote_shell_enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces)
remote_shell_host=127.0.0.1
# The port the shell will listen on, default is 1337
remote_shell_port=1337
EDIT: Christophe Willemsen pointed out that for my original error I had forgotten to restart the server at that point, but I was still unable to access the web server while it was running. So to make it clearer, I've edited the rest of the post:
I went to neo4j-server.properties and uncommented:
org.neo4j.server.webserver.address=0.0.0.0
And started the server:
NEO4J_HOME/bin/neo4j start
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow
Starting Neo4j Server...WARNING: not changing user
process [28557]... waiting for server to be ready..... OK.
http://localhost:7474/ is ready.
checking the status:
NEO4J_HOME/bin/neo4j status
Neo4j Server is running at pid 28557
I can run the shell, but when I go to localhost:7474 I still cannot connect.
Any help would be appreciated. The only tutorial or help I've found assumed I was starting from scratch with a new instance. If someone could provide some instructions for installing, or fix my configuration, that would be great.
Thanks!
You have to edit neo4j-server.properties and uncomment the line with:
org.neo4j.server.webserver.address=0.0.0.0
so that the DB listens on an external interface, not just localhost. You also have to open the port (7474) in your firewall rules.
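If the firewall in question is an EC2 security group (as in this setup), opening 7474 could look something like this with the AWS CLI - the group ID and source CIDR are placeholders:
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7474 --cidr 203.0.113.10/32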
Make sure to secure access to the db though:
http://neo4j.com/docs/stable/security-server.html