I'm doing distributed testing using JMeter and I have a remote server in AWS. Using JMeter 2.12, I have my setup similar to the instructions in Setting up JMeter for Distributed testing in AWS with connectivity issues.
However, I've been unable to record the JTL files for the tests being run.
When I use
sh jmeter.sh -n -t /jmeter/apache-jmeter-2.12/tests/ABC.jmx -l results.jtl -r
the test will execute, but the results.jtl file is not created when connecting to my remote server.
If I remove the -r flag (thus not doing distributed testing), the log file is generated.
How do I generate the .JTL file after testing using the remote server?
Google Cloud Run allows for using Cloud SQL. But what if you need Cloud SQL when building your container in Google Cloud Build? Is that possible?
Background
I have a Next.js project that runs in a container on Google Cloud Run. Pushing my code to Cloud Build (installing dependencies, generating static pages and putting everything in a container) and deploying to Cloud Run works perfectly. 👌
Cloud SQL
But I just added some functionality that also needs some data from my PostgreSQL instance running on Google Cloud SQL. This data is used when building the project (generating the static pages).
Locally, on my machine, this works fine, as the project can connect to my Cloud SQL proxy. This should also work when running in Cloud Run, as Cloud Run allows connecting to my Postgres instance on Cloud SQL.
My problem
When building my project with Cloud Build, I need access to my database to be able to generate my static pages. I am looking for a way to connect my Docker cloud builder to Cloud SQL, perhaps just like Cloud Run (fully managed) provides a mechanism that connects using the Cloud SQL Proxy.
That way I could be connecting to /cloudsql/INSTANCE_CONNECTION_NAME while building my project!
Question
So my question is: How do I connect to my PostgreSQL instance on Google Cloud SQL via the Cloud SQL Proxy while building my project on Google Cloud Build?
Things like my database credentials, etc. already live in Secrets Manager, so I should be able to use those details I guess 🤔
You can use whatever container you need to generate your static pages, and download the Cloud SQL Proxy in that build step to open a tunnel to the database:
- name: '<YOUR CONTAINER>'
  entrypoint: 'sh'
  args:
    - -c
    - |
      wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
      chmod +x cloud_sql_proxy
      ./cloud_sql_proxy -instances=<my-project-id:us-central1:myPostgresInstance>=tcp:5432 &
      <YOUR SCRIPT>
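With the proxy listening on localhost, whatever runs in <YOUR SCRIPT> can reach Postgres over TCP. As a rough sketch (the connection string below is a hypothetical placeholder; the real credentials would come from Secret Manager):
# hypothetical: point the Next.js build at the proxied port
DATABASE_URL="postgresql://dbuser:dbpass@127.0.0.1:5432/mydb" npm run build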
App Engine has an exec wrapper which has the benefit of proxying your Cloud SQL instance in for you, so I use that to connect to the DB in Cloud Build (so do some Google tutorials).
However, be warned of trouble ahead: Cloud Build runs exclusively* in us-central1, which means it will be pathologically slow to connect from anywhere else. For one or two operations I don't care, but if you're running a whole suite of integration tests, that simply will not work.
Also, you'll need to grant permission for GCB to access GCSQL.
steps:
  - id: 'Connect to DB using appengine wrapper to help'
    name: gcr.io/google-appengine/exec-wrapper
    args:
      [
        '-i', # The image you want to connect to the db from
        '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME:$SHORT_SHA',
        '-s', # The postgres instance
        '${PROJECT_ID}:${_POSTGRES_REGION}:${_POSTGRES_INSTANCE_NAME}',
        '-e', # Get your secrets here...
        'GCLOUD_ENV_SECRET_NAME=${_GCLOUD_ENV_SECRET_NAME}',
        '--', # And then the command you want to run, in my case a database migration
        'python',
        'manage.py',
        'migrate',
      ]
substitutions:
  _GCLOUD_ENV_SECRET_NAME: mysecret
  _GCR_HOSTNAME: eu.gcr.io
  _POSTGRES_INSTANCE_NAME: my-instance
  _POSTGRES_REGION: europe-west1
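As for the permissions mentioned above, the Cloud Build service account needs the Cloud SQL Client role. A sketch of how that might be granted (the project ID and project number here are placeholders):
gcloud projects add-iam-policy-binding my-project-id \
  --member=serviceAccount:123456789@cloudbuild.gserviceaccount.com \
  --role=roles/cloudsql.client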
* unless you're willing to pay more and get very stung by Beta software, in which case you can use Cloud Build workers (which, at the time of writing, are in Beta anyway... I'll come back and update if they make it into production and fix the issues)
The ENV VARS (including DB connections) are not available during build steps.
However, you can use Docker's ENTRYPOINT to run commands when the container runs (after the build steps complete).
I needed to run DB migrations when a new build was deployed (i.e. when the container starts running), and using ENTRYPOINT (pointing to a file/command) made it possible to run the migrations, which require DB connection details that are not available during the build process.
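As a minimal sketch of that idea, assuming a Node project with a hypothetical migrate script (file names and commands here are illustrative, not from the original answer):
Dockerfile (defer migrations to container start):
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh (DB connection details are available here at runtime, unlike during the build):
#!/bin/sh
npm run migrate && exec npm start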
"How to" part is pretty brief and is located here : https://stackoverflow.com/a/69088911/867451
I'm using an Ubuntu 12.04 AMI in EC2 to create a Puppet cluster, and I'm facing problems while configuring it.
The problem is that the master is not able to recognize the slaves.
Do I need more packages other than MySQL?
What changes do I need in /etc/mysql/my.cnf?
Puppet is a configuration management tool that automates the process of defining and maintaining a consistent state across several developer workstations. It is a descriptive, centralized, client-server based system. The central server is configured, and the clients synchronize themselves to it to ensure that all systems end up in the described state. For instance, the task of ensuring the same development environment on all developer systems in a project can be easily accomplished using Puppet.
Here is a quick procedure to set up a Puppet server and one Puppet client on Amazon EC2 instances running Ubuntu, and to install Puppet Dashboard on the server to view the status of the clients.
Prerequisites
Two EC2 instances set up with an Ubuntu AMI.
One instance named puppetserver and the other puppetclient.
Procedure
Puppet server and client set up
Configuring hosts files: View the /etc/hostname file on puppetserver and puppetclient. These are the Puppet server and client hostnames, respectively.
Edit /etc/hosts file on both the systems. Add server and client IPs and corresponding hostnames.
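For example, assuming hypothetical private IPs for the two instances, the entries on both machines might look like:
10.0.0.10 puppetserver
10.0.0.11 puppetclient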
Setting up the Puppet Server
Enabling the Puppet Labs Package repository
Download the "puppetlabs-release" package for the OS (here, Ubuntu 12.04) on the Puppet server.
Install the package by running
dpkg -i <package>
Run apt-get update to get the new list of available packages.
For example, to enable the repository for Ubuntu 12.04, Precise Pangolin:
wget https://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb
sudo apt-get update
Install Puppet
Install puppetmaster
sudo apt-get update
sudo apt-get install puppetmaster
Setting up the Puppet Client
Install Puppet on the puppet client(s)
sudo apt-get update
sudo apt-get install puppet
Specify the Puppet server domain name on the client. To do this, modify the
/etc/puppet/puppet.conf
file and add the line
server=<puppet server hostname>
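Using the hostnames from this walkthrough, the relevant section of puppet.conf would look something like:
[main]
server=puppetserver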
The client can now connect to the Puppet master.
Start the Puppet agent service for establishing first communication between server and client.
sudo puppet agent --verbose --no-daemonize --onetime
This starts a connection to the Puppet master process that is listening on port 8140 on the Puppet server. The output will be verbose, and the agent will not continue running in the background as a daemon. Also, it will run only one time; that is, after the connection is closed, the agent process will exit.
The client has made itself known to the server by sending an SSL certificate request. The server needs to certify the client.
To view the list of yet-to-be signed certificates on the server
sudo puppet cert --list
This lists the pending certificate requests, including the request from the client just set up.
Sign the client node's SSL certificate
sudo puppet cert --sign <puppet client name>
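For the client set up in this walkthrough, that would be:
sudo puppet cert --sign puppetclient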
The client can now establish a full connection to the server and poll the Puppet master for any configuration updates.
Defining Configurations
We have set up Puppet on both the server and the client, and have established communication between the two machines. The next step is to define the configuration for the target systems using Puppet manifests. These manifests are specified in the site.pp file.
As an example, we define a manifest that will create a helloworld.txt file on the client.
Defining manifest
Put the following manifest definition in /etc/puppet/manifests/site.pp file,
node "<puppet client hostname>" {
  file { "/home/ubuntu/helloworld.txt":
    content => "This is test content",
    ensure  => file,
    owner   => "ubuntu",
    group   => "ubuntu",
    mode    => 0644,
  }
}
This manifest defines that the Puppet client must have a helloworld.txt file
in the /home/ubuntu/ folder with the content "This is test content".
Getting changes on client
On puppet client, run the following command.
sudo puppet agent -t
The Puppet client pulls the manifests defined in the site.pp file on the Puppet server. It learns that a file named helloworld.txt with the defined specifications is expected to exist at /home/ubuntu. Since no such file exists on the client, the agent takes action and creates the file.
View the 'helloworld.txt' file
To verify that the client is in the state defined by the Puppet server, run the following command
sudo vi /home/ubuntu/helloworld.txt
The file contents are same as defined in the manifest definition on the server.
Installing Puppet Dashboard
Overview
Puppet Dashboard is a GUI that interfaces with Puppet. It can be used to view and report the status of all the client nodes. Puppet Dashboard runs on port 3000 on the Puppet server.
Following are the steps for setting it up.
Installing external dependencies
Dashboard is a Ruby on Rails web app and thus requires certain software to be installed
RubyGems
Rake version 0.8.3 or newer
MySQL database server version 5.x
Ruby-MySQL bindings version 2.7.x or 2.8.x
Install the packages
sudo apt-get install -y build-essential irb libmysql-ruby libmysqlclient-dev libopenssl-ruby libreadline-ruby mysql-server rake rdoc ri ruby ruby-dev
Install RubyGems package system
(
  URL="http://production.cf.rubygems.org/rubygems/rubygems-1.3.7.tgz"
  PACKAGE=$(echo $URL | sed "s/\.[^\.]*$//; s/^.*\///")
  cd $(mktemp -d /tmp/install_rubygems.XXXXXXXXXX) && \
  wget -c -t10 -T20 -q $URL && \
  tar xfz $PACKAGE.tgz && \
  cd $PACKAGE && \
  sudo ruby setup.rb
)
Create gem as an alternative name for gem1.8
sudo update-alternatives --install /usr/bin/gem gem /usr/bin/gem1.8 1
Installing Puppet Dashboard
Install puppet-dashboard from puppetlabs package repository
sudo apt-get update
sudo apt-get install puppet-dashboard
Configuring Dashboard
Modify the database.yml file. It can be found at /usr/share/puppet-dashboard/config/database.yml.
Under the key-value pairs for the production environment, the database value dashboard_production specifies the dashboard database name, and the username value dashboard specifies the user for this database. In the next step, we will create both the database and the user. The password value is the password for MySQL.
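As a reference sketch (the password shown is a placeholder), the production block typically looks something like:
production:
  database: dashboard_production
  username: dashboard
  password: my_password
  encoding: utf8
  adapter: mysql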
Creating and Configuring MySQL database
Create the user and database for puppet-dashboard. Navigate to the MySQL command line and run:
CREATE DATABASE dashboard_production CHARACTER SET utf8;
CREATE USER 'dashboard'@'localhost' IDENTIFIED BY 'my_password';
GRANT ALL PRIVILEGES ON dashboard_production.* TO 'dashboard'@'localhost';
Configure MySQL's maximum packet size to permit larger rows in database
set global max_allowed_packet = 33554432;
Also modify the mysql configuration file /etc/mysql/my.cnf
Allowing 32MB allows an occasional 17MB row with plenty of spare room
max_allowed_packet = 32M
To create dashboard tables, run the following command in the puppet-dashboard folder
cd /usr/share/puppet-dashboard
rake RAILS_ENV=production db:migrate
Testing that Dashboard is working
Start the dashboard using Ruby’s built-in WEBrick server
cd /usr/share/puppet-dashboard
sudo ./script/server -e production
The Dashboard instance starts on port 3000 using the "production" environment. The Dashboard UI can be viewed at http://<server hostname>:3000
Configure puppet
Both the puppet server and client need to be configured for the dashboard to receive reports.
Configure agent nodes to submit reports to master by turning their reporting ON.
puppet.conf (on each agent)
[agent]
report = true
Configure the server. Add the http report handler to puppet server's reports setting and set reporturl to Dashboard instance’s reports/upload URL
puppet.conf (on puppet master)
[master]
reports = store, http
reporturl = http://<server hostname>:3000/reports/upload
For enabling Dashboard's external node classifier (ENC):
puppet.conf (on puppet master)
[master]
node_terminus = exec
external_nodes = /usr/bin/env PUPPET_DASHBOARD_URL=http://<server hostname>:3000 /usr/share/puppet-dashboard/bin/external_node
Testing Puppet's connection to Dashboard
Restart the puppet master
Run one of the puppet agents to test the configurations
sudo puppet agent -t
If the agent run completes successfully, the report has arrived at the Dashboard. To process it, we will activate the delayed_job workers.
Starting delayed_job workers
Run the following command
cd /usr/share/puppet-dashboard
sudo env RAILS_ENV=production script/delayed_job -p dashboard -n 1 -m start
This starts the delayed_job workers, which complete the pending tasks.
Thus, Puppet is now installed on two EC2 instances, one acting as the server and the other as the client. Puppet Dashboard is also installed to view the status of the client nodes.
I am trying to upload a file to a remote server using the SCP task. I have OpenSSH configured on the remote server in question, and I am using an Amazon EC2 instance running Windows Server 2008 R2 with Cygwin to run the Bamboo build server.
My question is regarding finding the directory I wish to use. I want to upload the entire contents of C:\doc using SCP. The documentation notes that I must use the local path relative to the Bamboo working directory rather than an absolute directory name.
I found by running pwd during the build plan that the working directory is /cygdrive/c/build-dir/CDP-DOC-JOB1. So to get to doc, I can run cd ../../doc. However, when I set my working directory under the SCP configuration as ../../doc/** (using this pattern matching guide), I get the message There were no files to upload. in the log.
C:\doc contains subfolders as well as a textfile in the root directory.
(The original post included screenshots of the SCP task configuration and of the directory as seen from Cygwin.)
You may add a first "script" task running a Windows shell that copies everything from C:\doc to a local directory, and then run the SCP task to copy the contents of this new directory onto your remote server:
mkdir doc
xcopy c:\doc .\doc /E /F
Then the pattern for copy should be /doc/**
As we all know by now, the only way to run tests on iOS is by using the simulator. My problem is that we are running jenkins and the iOS builds are running on a slave (via SSH), as a result running xcodebuild can't start the simulator (as it runs headless). I've read somewhere that it should be possible to get this to work with SimLauncher (gem sim_launcher). But I can't find any info on how to set this up with xcodebuild. Any pointers are welcome.
Headless and xcodebuild do not mix well. Please consider this alternative:
You can configure the slave node to launch via jnlp (webstart). I use a bash script with the .command extension as a login item (System Preferences -> Users -> Login Items) with the following contents:
#!/bin/bash
slave_url="https://gardner.company.com/jenkins/jnlpJars/slave.jar"
max_attempts=40 # ten minutes

# Fetch slave.jar, retrying until curl succeeds and the jar verifies as a valid zip
curl -fO "${slave_url}" >>slave.log
rc=$?
while [ $rc -ne 0 -a $max_attempts -gt 0 ]; do
  echo "Waiting to try again. curl returned $rc"
  sleep 5
  curl -fO "${slave_url}" >>slave.log
  rc=$?
  if [ $rc -eq 0 ]; then
    zip -T slave.jar
    rc=$?
  fi
  let max_attempts-=1
done

# Simulator
java -jar slave.jar -jnlpUrl https://gardner.company.com/jenkins/computer/buildmachine/slave-agent.jnlp -secret YOUR_SECRET_KEY
The build user is set to automatically login. You can see the arguments to the slave.jar app by executing:
gardner:~ buildmachine$ java -jar slave.jar --help
"--help" is not a valid option
java -jar slave.jar [options...]
-auth user:pass : If your Jenkins is security-enabled, specify
a valid user name and password.
-connectTo HOST:PORT : make a TCP connection to the given host and
port, then start communication.
-cp (-classpath) PATH : add the given classpath elements to the
system classloader.
-jar-cache DIR : Cache directory that stores jar files sent
from the master
-jnlpCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP requests.
-jnlpUrl URL : instead of talking to the master via
stdin/stdout, emulate a JNLP client by
making a TCP connection to the master.
Connection parameters are obtained by
parsing the JNLP file.
-noReconnect : Doesn't try to reconnect when a communication
fail, and exit instead
-proxyCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP authenticated proxy requests.
-secret HEX_SECRET : Slave connection secret to use instead of
-jnlpCredentials.
-slaveLog FILE : create local slave error log
-tcp FILE : instead of talking to the master via
stdin/stdout, listens to a random local
port, write that port number to the given
file, then wait for the master to connect to
that port.
-text : encode communication with the master with
base64. Useful for running slave over 8-bit
unsafe protocol like telnet
gardner:~ buildmachine$
For a discussion about OSX slaves and how the master is launched please see this Jenkins bug: https://issues.jenkins-ci.org/browse/JENKINS-21237
Erik - I ended up doing the items documented here:
Essentially:
The first problem, is that you do have to have the user that runs the builds also logged in to the console on that Mac build machine. It needs to be able to pop up the simulator, and will fail if you don’t have a user logged in — as it can’t do this entirely headless without a display.
Secondly, the Xcode developer tools require elevated privileges in order to execute all of the tasks for the unit tests. Sometimes you may miss seeing it, but without these, the Simulator will give you an authentication prompt that never clears.
A first solution to this (on Mavericks) is to run:
sudo security authorizationdb write system.privilege.taskport allow
This will eliminate one class of these authentication popups. You’ll also need to run:
sudo DevToolsSecurity --enable
Per Apple’s man page on this tool:
On normal user systems, the first time in a given login session that
any such Apple-code-signed debugger or performance analysis tools are
used to examine one of the user's processes, the user is queried for
an administrator password for authorization. Use the DevToolsSecurity
tool to change the authorization policies, such that a user who is a
member of either the admin group or the _developer group does not need
to enter an additional password to use the Apple-code-signed debugger
or performance analysis tools.
The only issue is that these same things seem to have broken once I upgraded to Xcode 6. Back to the drawing board....
I am using JMeter's distributed testing feature, which works fine. However, when I schedule a distributed run, it just runs immediately and disregards the schedule. This happens only for distributed testing. Any ideas?
I haven't come across that little gem. Have you tried starting from the command line to see if it works properly?
Step 3b: Start the JMeter from a non-GUI Client
As an alternative, you can start the remote server(s) from a non-GUI (command-line) client. The command to do this is:
jmeter -n -t script.jmx -r
or
jmeter -n -t script.jmx -R server1,server2...
Other flags that may be useful:
-Gproperty=value - define a property in all the servers (may appear more than once)
-Z - Exit remote servers at the end of the test.
The first example will start whatever servers are defined in the JMeter property remote_hosts; the second example will define remote_hosts from the list of servers and then run the remote servers.
The command-line client will exit when all the remote servers have stopped.
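For reference, remote_hosts is a regular JMeter property, typically set in jmeter.properties (or user.properties); a hypothetical example listing two load generators:
remote_hosts=server1.example.com,server2.example.com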