I have a question: can we call variables from a YAML (.yml) file in JMeter? I am facing an issue.
The reason for using a YAML (.yml) file instead of a .properties file is that our script runs fine locally when we pass the variables (i.e. the number of users) in the config.properties file, but when we run it in AWS distributed testing we get an issue. So I am trying YAML, as someone suggested, but how do I call YAML variables in JMeter? We are facing a challenge here. Any idea, or any other approach?
I fail to see how a YAML file will help there.
If you have config.properties file you can "feed" it to the JMeter Slaves as follows:
Either start the JMeter slave process providing the path to the config.properties file via the -q command-line argument
Or pass the config.properties file from the master machine to the slaves via the -G command-line argument (see the sketch below)
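For example, assuming the file lives at /path/to/config.properties and the test plan is test.jmx (names and host names here are illustrative), the two options would look roughly like this:

# option 1: each slave loads the extra property file locally at startup
jmeter-server -q /path/to/config.properties

# option 2: the master sends the properties to all remote engines
jmeter -n -t test.jmx -R slave1,slave2 -Gconfig.properties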
More information:
Full list of command-line options
How to Perform Distributed Testing in JMeter
If by "AWS Distribution Testing" you mean Distributed Load Testing on AWS then check out JMeter Properties and Variables article of Taurus documentation
With Kubernetes, I used to mount a file containing feature-flags as key/value pairs. Our UI would then simply get the file and read the values.
Like this: What's the best way to share/mount one file into a pod?
Now I want to do the same with the manifest file for CloudFoundry. How can I mount a file so that it will be available in the /dist folder at deployment time?
To add more information: when we mount a file, the UI can later download the file and read its content. We are using React, and any call to the server has to go through an Apigee layer.
The typical approach to mounting files into a CloudFoundry application is called Volume Services. This takes a remote file system like NFS or SMB and mounts it into your application container.
I don't think that's what you want here. It would probably be overkill to mount in a single file. You totally could go this route though.
That said, CloudFoundry does not have a built-in concept similar to Kubernetes', where you can take your configuration and mount it as a file. With CloudFoundry you do have a few similar options. They are not exactly the same, though, so you'll have to determine whether one of them will work for your needs.
You can pass config through environment variables (or through user-provided service bindings, but those also arrive via an environment variable, VCAP_SERVICES). This won't be a file, but perhaps you can have your UI read that instead. (You didn't mention how the UI gets the file, so I can't comment further; if you elaborate on that point, e.g. whether it's fetched over HTTP or read from disk, I could expand on this option.)
If it absolutely needs to be a file, your application could read the environment variable's contents and write them to disk when it starts. If your application isn't able to do that, for example if you're using Nginx, you could include a .profile script at the root of your application that reads the variable and generates the file. For example: echo "$CFG_VAR" > /dist/file, or whatever you need to do to generate that file.
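As a minimal sketch (the variable name CFG_VAR and the target file name are illustrations only), such a .profile script could look like this:

#!/bin/bash
# .profile at the application root runs before the app itself starts
mkdir -p dist
# write the config held in the environment variable out as a file the UI can fetch
echo "$CFG_VAR" > dist/feature-flags.json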
A couple more notes on using environment variables: there are limits to how much information can go in them (sorry, I don't know the exact value off the top of my head, but I think it's around 128K). They're also not great for binary configuration, in which case you'd need to base64-encode your data first.
You can pull the config file from a config server and cache it locally. This can be pretty simple: the first thing your app does when it starts is reach out, download the file, and place it on disk, where it will persist for the duration of your application's lifetime.
If you don't have a server-side application, for example if you're running Nginx, you can include a .profile script (it can be any executable script) at the root of your application which uses curl or another tool to download and set up that configuration.
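For instance, a .profile script for that approach might look like the following; the CONFIG_URL variable and the target path are assumptions for illustration:

#!/bin/bash
# .profile: download the config before the app (e.g. Nginx) starts serving
curl -fsSL "$CONFIG_URL" -o dist/feature-flags.json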
You can replace "config server" with an HTTP server, Git repository, Vault server, CredHub, database, or really any place you can durably store your data.
Not recommended, but you can also push your configuration file with the application. This would be as simple as including it in the directory or archive that you push. This has the obvious downside of coupling your configuration to the application bits that you push. Depending on where you work, the policies you have to follow, and the tools you use this may or may not matter.
There might be other variations you could use as well. Loading the file in your application when it starts or through a .profile script is very flexible.
I have some big .avro files in Google Cloud Storage and I want to concatenate all of them into a single file.
I found the avro-tools concat command:
java -jar avro-tools.jar concat
However, since my files are in a Google Cloud Storage path (gs://files.avro), I can't concat them using avro-tools. Any suggestions on how to solve this?
You can use the gsutil compose command. For example:
gsutil compose gs://bucket/obj1 [gs://bucket/obj2 ...] gs://bucket/composite
Note: For extremely large files and/or very low per-machine bandwidth, you may want to split the file and upload it from multiple machines, and later compose these parts of the file manually.
In my case I tested it with the following values: foo.txt contains the word Hello and bar.txt contains the word World. Running this command:
gsutil compose gs://bucket/foo.txt gs://bucket/bar.txt gs://bucket/baz.txt
the resulting baz.txt would contain:
Hello
World
Note: GCS does not support inter-bucket composing.
Just in case you encounter an exception about integrity checks, run gsutil help crcmod for instructions on how to fix it.
Check out https://github.com/spotify/gcs-tools
Lightweight wrapper that adds Google Cloud Storage (GCS) support to common Hadoop tools, including avro-tools, parquet-cli, proto-tools for Scio's Protobuf-in-Avro files, and magnolify-tools for Magnolify code generation, so that they can be used from regular workstations or laptops, outside of a Google Compute Engine (GCE) instance.
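With the GCS-enabled avro-tools jar from that project, the concat invocation should look roughly like the upstream one but with gs:// paths; the jar name and object names below are assumptions, so check the project's releases for the exact artifact:

# illustrative only: GCS-enabled avro-tools from gcs-tools reading and writing gs:// URIs
java -jar avro-tools.jar concat gs://bucket/part1.avro gs://bucket/part2.avro gs://bucket/merged.avro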
I'm trying to create DCOS services that download artifacts (custom config files, etc.) from HDFS. I was using a simple FTP server for this before, but I wanted to use HDFS. It is allowed to use "hdfs://" in the artifact URI, but it doesn't work correctly.
The artifact fetch ends with an error because there's no "hadoop" command. Weird. I read that I need to provide my own Hadoop for it.
So I downloaded Hadoop and set up the necessary variables in /etc/profile. I can run "hadoop" without any problem when SSH'ing to the node, but the service still ends with the same error.
It seems that the environment variables configured in the service are applied only after the artifact fetch, because they don't work at all. Also, it looks like services completely ignore the /etc/profile file.
So my question is: how do I set up everything so my service can fetch artifacts stored on HDFS?
The Mesos fetcher supports local Hadoop clients; please check your agent configuration, and in particular your --hadoop_home setting.
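As a sketch only: on a DC/OS agent, Mesos flags are typically supplied as MESOS_* environment variables rather than via /etc/profile; the file path, Hadoop location, and service name below are assumptions and may differ on your installation:

# point the Mesos agent at the local Hadoop client used by the fetcher
echo 'MESOS_HADOOP_HOME=/opt/hadoop' | sudo tee -a /var/lib/dcos/mesos-slave-common
sudo systemctl restart dcos-mesos-slave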
My JMeter test should upload a file to a web service.
The upload request contains the Username and the filename in the Header.
The Username and the filename are listed in a CSV file.
On my local machine the jmx file, the csv file and all test data are in the same directory.
The test works great here.
However, if I start the test remotely, the remote machine uses the correct username and filename but doesn't find the file, because it's obviously not in JMeter's base directory.
Is there a best practice to send the test data to the remote server or do I have to manually put them there in the right directory every time?
JMeter only sends the .jmx test plan (in the form of a HashTree) to the remote machines. It is also possible to pass some JMeter properties to the remote engines via the -G command-line argument.
Apart from this, the remote JMeter engines are absolutely independent: if your test explicitly relies on an external file (a .csv file used in the CSV Data Set Config, or another file which will be used for uploading), you need to copy this file to all the remote engines yourself, manually or in an automated manner (as sketched below); JMeter isn't smart enough to do this for you.
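A minimal sketch of the automated variant (the host names, JMeter user, and target directory are assumptions; the point is that the files end up at the same relative path on every engine as on your local machine):

# copy the CSV and the file to be uploaded to every remote engine
for host in slave1 slave2; do
  scp users.csv upload.dat jmeter@"$host":/opt/jmeter/tests/
done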
I am new to JMeter. I have written a .jmx script containing a few HTTP samplers for performance testing, and I can run it locally and see the results in the Summary Report.
The question is: how can I achieve the same on an AWS EC2 instance?
You can run your script on an EC2 instance in exactly the same manner as in your local environment. However, if you're talking about a completely new instance with nothing installed, consider the following checklist (a sample non-GUI run is sketched after the list):
A 64-bit Java JDK installed (Java 8 or later for current JMeter versions), the JAVA_HOME environment variable set, and the /bin folder of the JDK installation in the PATH variable.
JMeter downloaded and unpacked, with relevant HEAP and other JVM_ARGS overrides made in the JMeter startup scripts.
Any CSV data files, configuration files, plugins, and extensions transferred to the relevant locations so JMeter can find and use them.
Perform a sample run with 1 user and 1 iteration to see whether everything is good. Inspect the jmeter.log file for any warnings and errors.
Set the number of virtual users and iterations according to your load scenario.
Make sure the performance checklist from JMeter Performance and Tuning Tips passes for your environment.
Run, collect metrics, analyze results, raise issues, etc.
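For reference, a minimal non-GUI run on the instance could look like the sketch below; the JMeter version, paths, and heap sizes are illustrative only:

# set up Java on the PATH (paths are an assumption for this sketch)
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
export PATH="$JAVA_HOME/bin:$PATH"

# run the test in non-GUI mode with an increased heap and generate an HTML report
JVM_ARGS="-Xms1g -Xmx4g" ./apache-jmeter-5.6/bin/jmeter -n -t test.jmx -l results.jtl -e -o report/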