Scheduling JMeter distributed testing

I am using JMeter's distributed testing feature, which works fine. However, when I schedule a distributed run, it starts immediately and ignores the schedule. This only happens with distributed testing. Any ideas?

I haven't come across that little gem. Have you tried starting from the command line to see if it works properly?
Step 3b: Start the JMeter from a non-GUI Client
As an alternative, you can start the remote server(s) from a non-GUI (command-line) client. The command to do this is:
jmeter -n -t script.jmx -r
or
jmeter -n -t script.jmx -R server1,server2...
Other flags that may be useful:
-Gproperty=value - define a property in all the servers (may appear more than once)
-X - Exit remote servers at the end of the test.
The first example will start whatever servers are defined in the JMeter property remote_hosts; the second example will define remote_hosts from the list of servers and then run the remote servers.
The command-line client will exit when all the remote servers have stopped.
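Putting those options together, a minimal sketch of a non-GUI distributed run (the host addresses, property name and value, and results file name are placeholders):
jmeter -n -t script.jmx -R 10.0.0.11,10.0.0.12 -Gusers=50 -X -l results.jtl
Here -n selects non-GUI mode, -R overrides remote_hosts with the two servers, -G pushes a property to them, -X shuts the remote servers down when the test ends, and -l writes the result samples to a file on the client.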

Related

What is a quick and simple way to know if Docker containers are running on an EC2 instance?

I have a few Docker containers running on EC2 instances in AWS. In the past I have had situations where the Docker containers simply exit due to errors on the Docker daemon, and they never start up even though the restart policies are in place (the daemon is not running, so I don't expect them to come back up, of course).
Since I am going on holiday, I want to implement a quick and easy solution that would notify me if any containers have exited unexpectedly. The only quick solution I could find was using an Amazon EventBridge rule to run a scheduled task every X minutes that executes a Systems Manager RunDockerAction command (docker ps) on the instances, but this does not give me any output except for the fact that the command executed successfully on the instance.
Is there any way that I can get the output of such an EventBridge task and send the results over an SNS topic if things go wrong?
If you are running Linux on your AWS EC2 instance, then one solution is to use e-mail as a notification system. In that case, I would suggest the following:
On the AWS EC2 instance, create a Bash script that runs docker ps -a and combine that with a grep statement to filter on the Docker container IDs that you want to monitor.
In the same Bash script, using echo and mail, you can e-mail yourself the statistics seen in the previous step. For example:
echo "${container} is not running" | mail -s "Alert! Docker container ${container} is not running!" "first.last#domain.com"
(The above relies on $container being set appropriately. Use grep to filter out the data of interest.)
Create a system crontab job (/etc/crontab) and schedule the Bash script to run at your desired interval.
This is only one possible solution, one that I use myself for quick checks at times.
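Putting those steps together, a rough sketch of such a script (the container names, script path, and e-mail address are placeholders rather than anything from the question):
#!/bin/bash
# check_containers.sh (hypothetical name): e-mail an alert for every monitored
# container that does not show up as running in `docker ps`.
monitored="web-app worker"                # container names to watch (placeholders)
recipient="first.last#domain.com"         # placeholder address, as above

running=$(docker ps --format '{{.Names}}')
for container in $monitored; do
    if ! printf '%s\n' "$running" | grep -qx "$container"; then
        echo "${container} is not running" \
            | mail -s "Alert! Docker container ${container} is not running!" "$recipient"
    fi
done
A matching /etc/crontab entry to run it every five minutes could look like:
*/5 * * * * root /usr/local/bin/check_containers.sh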

Running Django checks in production runtime

I have a Django app that is deployed on Kubernetes. The container also has a mount to a persistent volume containing some files that are needed for operation. I want a check that verifies those files are present and accessible at runtime, every time a pod starts. The Django documentation recommends against running checks in production (the app runs under uWSGI), and because the files are only available in the production environment, the check fails when unit tested.
What would be an acceptable process for executing the checks in production?
This is a community wiki answer posted for better visibility. Feel free to expand it.
Your use case can be addressed from the Kubernetes perspective. All you have to do is use startup probes:
The kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running.
With it you can use an ExecAction, which executes a specified command inside the container. The diagnostic is considered successful if the command exits with a status code of 0. An example of a simple check could be one that verifies that a particular file exists:
exec:
  command:
    - stat
    - /file_directory/file_name.txt
You could also use a shell script but remember that:
Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell.
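For instance, a minimal sketch of such a check script (the script name and the second file path are assumptions for illustration), which the probe could then invoke explicitly through a shell, e.g. with command: ["/bin/sh", "/scripts/startup-check.sh"]:
#!/bin/sh
# startup-check.sh (hypothetical name): exit non-zero while any required file on
# the persistent volume is missing or unreadable, so the startup probe keeps
# failing until the mount is actually usable.
for f in /file_directory/file_name.txt /file_directory/other_file.txt; do
    if [ ! -r "$f" ]; then
        echo "missing or unreadable: $f" >&2
        exit 1
    fi
done
exit 0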

How to use Regular Expression Extractor when running JMeter in Non-GUI Mode

I am trying to run HTTPS requests in JMeter. Using GUI mode I ran 5000 requests and got responses for them in JSON format.
I want to read a particular field in the JSON called "responseCode". For this we need to use regular expressions, but I want to know how to use a regular expression in non-GUI mode.
I assume you know how to run a test in non-GUI mode. If you don't:
jmeter -n -t my_test.jmx -l log.jtl -H <my.proxy.server> -P <8000> -u <username> -p <password>
The answer to your question:
Create the script in GUI mode.
Add all the regexes you want to your script.
Verify that they work with a small number of users/threads.
Once you are sure it is working, run that test directly from non-GUI mode (without modifying it).
The regexes present in the script will work in non-GUI mode as well. JMeter components are independent of the UI; they are tied to the script. Add all the components you want to the script while recording/replaying, perform correlation and parameterization, and then schedule the script in non-GUI mode.
It should run as smoothly as it did in GUI mode.
Once you have developed the script in GUI you can use the same script in non-GUI mode too.
jmeter -n -t my_test.jmx -l log.jtl -H my.proxy.server -P 8000
In case you are looking for a component with which you can extract the responseCode, you can use the Regular Expression Extractor (under Post Processors) for extracting it:
http://jmeter.apache.org/usermanual/component_reference.html#Regular_Expression_Extractor
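If you want to sanity-check the pattern itself outside JMeter before the non-GUI run, a quick command-line sketch (the sample file and JSON layout are made up for illustration):
# Hypothetical sample response saved from a GUI run
echo '{"responseCode":"200","message":"OK"}' > sample_response.json
# Roughly the pattern you would configure in the Regular Expression Extractor,
# e.g. "responseCode":"(\d+)"; grep -E uses [0-9]+ instead of \d
grep -oE '"responseCode":"[0-9]+"' sample_response.json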

Distributed Testing using JMeter: Retrieving logs

I'm doing distributed testing using JMeter and I have a remote server in AWS. Using JMeter 2.12, I have my setup similar to the instructions: Setting up JMeter for Distributed testing in AWS with connectivity issues
However, I've been unable to record the JTL files for the tests being run.
When I use
sh jmeter.sh -n -t /jmeter/apache-jmeter-2.12/tests/ABC.jmx -l results.jtl -r
the test executes but the results.jtl file is not created when connecting to my remote server.
If I remove the -r flag (thus not doing distributed testing), the log file is generated.
How do I generate the .JTL file after testing using the remote server?

xcodebuild running tests headless?

As we all know by now, the only way to run tests on iOS is by using the simulator. My problem is that we are running Jenkins and the iOS builds run on a slave (via SSH); as a result, running xcodebuild can't start the simulator (as it runs headless). I've read somewhere that it should be possible to get this to work with SimLauncher (gem sim_launcher), but I can't find any info on how to set this up with xcodebuild. Any pointers are welcome.
Headless and xcodebuild do not mix well. Please consider this alternative:
You can configure the slave node to launch via jnlp (webstart). I use a bash script with the .command extension as a login item (System Preferences -> Users -> Login Items) with the following contents:
#!/bin/bash
slave_url="https://gardner.company.com/jenkins/jnlpJars/slave.jar"
max_attempts=40 # ten minutes

# Fetch slave.jar; if the download fails, keep retrying (and verify the
# downloaded jar with zip -T) until it succeeds or max_attempts runs out.
curl -fO "${slave_url}" >>slave.log
rc=$?
while [ $rc -ne 0 -a $max_attempts -gt 0 ]; do
    echo "Waiting to try again. curl returned $rc"
    sleep 15
    curl -fO "${slave_url}" >>slave.log
    rc=$?
    if [ $rc -eq 0 ]; then
        zip -T slave.jar
        rc=$?
    fi
    let max_attempts-=1
done

# Simulator
java -jar slave.jar -jnlpUrl https://gardner.company.com/jenkins/computer/buildmachine/slave-agent.jnlp -secret YOUR_SECRET_KEY
The build user is set to log in automatically. You can see the arguments to the slave.jar app by executing:
gardner:~ buildmachine$ java -jar slave.jar --help
"--help" is not a valid option
java -jar slave.jar [options...]
-auth user:pass : If your Jenkins is security-enabled, specify
a valid user name and password.
-connectTo HOST:PORT : make a TCP connection to the given host and
port, then start communication.
-cp (-classpath) PATH : add the given classpath elements to the
system classloader.
-jar-cache DIR : Cache directory that stores jar files sent
from the master
-jnlpCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP requests.
-jnlpUrl URL : instead of talking to the master via
stdin/stdout, emulate a JNLP client by
making a TCP connection to the master.
Connection parameters are obtained by
parsing the JNLP file.
-noReconnect : Doesn't try to reconnect when a communication
fail, and exit instead
-proxyCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP authenticated proxy requests.
-secret HEX_SECRET : Slave connection secret to use instead of
-jnlpCredentials.
-slaveLog FILE : create local slave error log
-tcp FILE : instead of talking to the master via
stdin/stdout, listens to a random local
port, write that port number to the given
file, then wait for the master to connect to
that port.
-text : encode communication with the master with
base64. Useful for running slave over 8-bit
unsafe protocol like telnet
gardner:~ buildmachine$
For a discussion about OSX slaves and how the master is launched please see this Jenkins bug: https://issues.jenkins-ci.org/browse/JENKINS-21237
Erik - I ended up doing the items documented here:
Essentially:
The first problem is that you do have to have the user that runs the builds also logged in to the console on that Mac build machine. It needs to be able to pop up the simulator, and will fail if you don’t have a user logged in, as it can’t do this entirely headless without a display.
Secondly, the Xcode developer tools require elevated privileges in order to execute all of the tasks for the unit tests. Sometimes you may not notice it, but without these, the Simulator will give you an authentication prompt that never clears.
A first solution to this (on Mavericks) is to run:
sudo security authorizationdb write system.privilege.taskport allow
This will eliminate one class of these authentication popups. You’ll also need to run:
sudo DevToolsSecurity --enable
Per Apple’s man page on this tool:
On normal user systems, the first time in a given login session that any such Apple-code-signed debugger or performance analysis tools are used to examine one of the user’s processes, the user is queried for an administrator password for authorization. Use the DevToolsSecurity tool to change the authorization policies, such that a user who is a member of either the admin group or the _developer group does not need to enter an additional password to use the Apple-code-signed debugger or performance analysis tools.
The only issue is that these same things seem to have broken once I upgraded to Xcode 6. Back to the drawing board...