JMeter: remote test with file upload - web-services

My JMeter test should upload a file to a web service.
The upload request contains the Username and the filename in the Header.
The Username and the filename are listed in a CSV file.
On my local machine, the .jmx file, the CSV file and all test data are in the same directory.
The test works great here.
However, if I start the test remotely, the remote machine uses the correct username and filename but won't find the file, because it's obviously not in JMeter's BaseDir there.
Is there a best practice to send the test data to the remote server or do I have to manually put them there in the right directory every time?

JMeter sends only the .jmx test plan (in the form of a HashTree) to the remote machines. It is also possible to pass some JMeter properties to the remote engines via the -G command-line argument.
Apart from this, remote JMeter engines are completely independent. If your test explicitly relies on an external file (a .csv file used in the CSV Data Set Config, or any other file that will be used for uploading), you need to copy this file to all remote engines, manually or in an automated manner; JMeter isn't smart enough to do this for you.
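For illustration, here is a rough sketch of the automated option; the host names, directories and file names below are placeholders, not taken from your setup:

# copy the CSV and the file to be uploaded into JMeter's bin directory on every remote engine
for host in slave1 slave2; do
  scp users.csv upload.dat "$host":/opt/jmeter/bin/
done

# start the test from the master in non-GUI mode, running it on all configured remote engines (-r);
# -G pushes a property to every remote engine, e.g. a hypothetical user count
jmeter -n -t upload-test.jmx -r -Gusers=50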

Related

JMeter with Yaml file (AWS Distribution Testing)

I have a question: can we call variables from a YAML (.yml) file in JMeter? I am facing an issue.
The reason for using a YAML (.yml) file instead of a .properties file is that our script runs fine locally when we pass the variables (i.e. the number of users) in a config.properties file, but when we run it in AWS distributed testing we hit an issue. So I am trying with YAML, as someone suggested, but how do I call the YAML variables in JMeter? We are facing a challenge here. Any idea or other approach?
I fail to see how a YAML file will help there.
If you have a config.properties file you can "feed" it to the JMeter slaves as follows:
Either start the JMeter slave process providing the path to the config.properties file via the -q command-line argument
Or pass the config.properties file from the master machine to the slaves using the -G command-line argument
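Roughly, the two options look like this (host names and file paths are placeholders):

# Option 1: start each slave with its own copy of the properties file
jmeter-server -q /path/to/config.properties

# Option 2: have the master push the properties file to the listed slaves at run time
jmeter -n -t test.jmx -R slave1,slave2 -Gconfig.properties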
More information:
Full list of command-line options
How to Perform Distributed Testing in JMeter
If by "AWS Distribution Testing" you mean Distributed Load Testing on AWS then check out JMeter Properties and Variables article of Taurus documentation

Variable, relative resource path for binary request body in Postman/Newman

Context
I am developing a web application that avails of a suite of REST tests built for Postman.
The idea is that you can run the tests manually with Postman as a REST client against the application runtime, and the Maven POM configures newman to run them automatically in the CI pipelines as integration tests whenever a build is triggered.
This has run fairly well in the past.
Requirement
However, due to an overhaul in business logic, many of those tests now require a binary body as a file resource in POST requests (mostly zip archives).
I need those tests to work in 3 scenarios:
Manually when running individual tests with Postman locally, against a runtime of the web application
Semi-manually as above, but by triggering a runner in Postman
Automatically, when newman is started by maven during the integration tests phase of our pipeline
In order to make sure the path to the file in each request would work regardless of the way the tests are run, I have added a Postman environment variable in each profile. The variable would then be used by the collection in the relevant requests, e.g.:
"body": {
"mode": "file",
"file": {
"src": "{{postman_resources_path}}/empty.zip"
}
},
The idea would be that:
locally, you manually override the value of postman_resources_path in the profile, in order to point to an absolute path on your machine (e.g. simply where you have the resources in source control) - this would then resolve it both for manual tests and a local runner
for the CI pipelines, the same would apply with a default value pointing to a path relative to the --working-dir value, which would be set in the newman command-line parametrization in the exec-maven-plugin already in use to run newman
Problem
While I haven't had a chance to test the pipeline yet with those assumptions, I can already notice that this isn't working locally.
Looking at a request, I can see that the environment variable is not being resolved, even though I have manually set its value in the profile I'm running the request against.
TL;DR The request fails, since the resource is not found.
The most relevant literature I've found does not address my use case entirely, but the solution given seems to follow a similar direction: "variabilize" the path - see here.
I could not find anything specific enough in the Postman reference.
I think I'm onto something here, but I won't accept my own answer yet.
TL;DR it may be simpler than it seemed initially.
This Postman doc page states:
When you send a form-data or binary file with a request body, Postman saves a path to the file as part of the collection. The file path is relative to your working directory.
If I modify the raw collection JSON to ensure only the file name (or any relative path) is the value of the "src" key in the file definition, and set up the working directory manually in my Postman client, it seems to resolve the file correctly, so there is no need for (non-working) variables in the file path.
The working directory setting does not seem to be saved in the collection, meaning a manual one-time setup for local clients and the usage of --working-dir with newman should do the trick altogether.
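For example (collection, environment and directory names are made up), with the collection's "src" reduced to a relative path such as "empty.zip", the newman side only needs the working directory pointed at the folder that holds the resources:

# resolve relative file paths in the collection against ./postman-resources
newman run my-collection.json -e my-environment.json --working-dir ./postman-resources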
Will self-accept once I've successfully tested with newman.

How to mount a file via CloudFoundry manifest similar to Kubernetes?

With Kubernetes, I used to mount a file containing feature-flags as key/value pairs. Our UI would then simply get the file and read the values.
Like this: What's the best way to share/mount one file into a pod?
Now I want to do the same with the manifest file for CloudFoundry. How can I mount a file so that it will be available in the /dist folder at deployment time?
To add more information: when we mount a file, the UI can later download it and read the content. We are using React, and any call to the server has to go through an Apigee layer.
The typical approach to mounting files into a CloudFoundry application is called Volume Services. This takes a remote file system like NFS or SMB and mounts it into your application container.
I don't think that's what you want here. It would probably be overkill to mount in a single file. You totally could go this route though.
That said, CloudFoundry does not have a built-in concept that's similar to Kubernetes, where you can take your configuration and mount it as a file. With CloudFoundry, you do have a few similar options. They are not exactly the same though so you'll have to make the determination if one will work for your needs.
You can pass config through environment variables (or through user-provided service bindings, but that comes through an environment variable, VCAP_SERVICES, as well). This won't be a file, but perhaps you can have your UI read that instead. (You didn't mention how the UI gets that file, so I can't comment further; if you elaborate on that point, e.g. whether it's HTTP or reading from disk, I could perhaps expand on this option.)
If it absolutely needs to be a file, your application could read the environment variable contents and write them to disk when it starts. If your application isn't able to do that, e.g. if you're using Nginx, you could include a .profile script at the root of your application that reads the variable and generates the file. For example: echo "$CFG_VAR" > /dist/file, or whatever you need to do to generate that file.
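A minimal sketch of such a .profile script (the variable name and target file are assumptions):

# .profile at the application root runs before the app starts:
# dump the environment variable's contents into a file the UI can read
mkdir -p dist
echo "$FEATURE_FLAGS" > dist/feature-flags.json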
A couple more notes on using environment variables: there are limits to how much information can go in them (sorry, I don't know the exact value off the top of my head, but I think it's around 128K), and they are not great for binary configuration, in which case you'd need to base64-encode your data first.
You can pull the config file from a config server and cache it locally. This can be pretty simple. The first thing your app does when it starts is to reach out and download the file, place it on the disk and the file will persist there for the duration of your application's lifetime.
If you don't have a server-side application like if you're running Nginx, you can include a .profile script (can be any executable script) at the root of your application which can use curl or another tool to download and set up that configuration.
You can replace "config server" with an HTTP server, Git repository, Vault server, CredHub, database, or really any place you can durably store your data.
Not recommended, but you can also push your configuration file with the application. This would be as simple as including it in the directory or archive that you push. This has the obvious downside of coupling your configuration to the application bits that you push. Depending on where you work, the policies you have to follow, and the tools you use this may or may not matter.
There might be other variations you could use as well. Loading the file in your application when it starts or through a .profile script is very flexible.

How to obtain output files from CloudFoundry that a deployed application has created?

I have deployed a Windows exe on CloudFoundry; now I want to get back the files from CloudFoundry that the exe has generated.
Is there any way of retrieving files using a batch script?
It's important to note that you shouldn't write anything critical to the local file system. The local file system is ephemeral, so if your app crashes or restarts for any reason before you obtain the files, they are gone.
Having said that, if you have an application that is doing some processing and generating output files, the recommended way to handle this would be to have the process write those files somewhere external and durable, like a database, SFTP server or S3-compatible service. In short, push the files out somewhere, instead of trying to get into the container to obtain the files.
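As an example of the "push it out" approach (the bucket name and the choice of the AWS CLI are assumptions; any S3-compatible client works):

# upload the generated output to durable object storage instead of leaving it in the container
aws s3 cp output/report.csv s3://my-durable-bucket/reports/report.csv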
If you must pull them out of the container, you have a couple options:
Run a small HTTP server in the container & expose the files on it. Make sure you are appropriately securing the server to prevent unauthorized access to your files.
Run scp or sftp and download the files. Instructions for doing that can be found here, but roughly speaking you run cf app app-name --guid to get the app guid, cf ssh-code to get a passcode, then scp -P 2222 -oUser=cf:<insert app-guid>/0 ssh.system_domain:my-remote-file.json ./ to get the file named my-remote-file.json. Obviously, change the path/file to what you want to download.
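Putting those steps together in a small script (the app name and file name are placeholders; the same commands can be wrapped in a Windows batch file):

#!/bin/bash
# download a file that the app generated inside its CloudFoundry container
APP_GUID=$(cf app my-app --guid)   # look up the application guid
cf ssh-code                        # prints a one-time passcode; enter it as the password when scp prompts
scp -P 2222 -oUser=cf:"$APP_GUID"/0 ssh.system_domain:my-remote-file.json ./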

How data gets into the HDFS file system

I am trying to understand how data from multiple sources and systems gets into HDFS. I want to push web server log files from 30+ systems. These logs are sitting on 18 different servers.
You can create a map-reduce job. The input for your mapper would be a file sitting on a server, and your reducer would work out which HDFS path to put the file in. You can either aggregate all of your files in your reducer, or simply write each file as-is to the given path.
You can use Oozie to schedule the job, or you can run it on demand by submitting the map-reduce job on the server which hosts the JobTracker service.
You could also create a Java application that uses the HDFS API. The FileSystem object can be used to do standard file system operations, like writing a file to a given path.
Either way, the file creation needs to go through the HDFS API, because the NameNode is responsible for splitting the file into blocks and writing them to the distributed servers.
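As a quick illustration of that operation, the HDFS command-line client (which wraps the same FileSystem API) can push a local log file to a given HDFS path; the directories and file names below are placeholders:

# create a per-server directory and copy a local web server log into it
hdfs dfs -mkdir -p /logs/webserver01
hdfs dfs -put /var/log/httpd/access.log /logs/webserver01/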