I started using Postman recently, and I can see the response to my request like this.
Whereas when I save the response into a file on my local system, it looks like this.
Locally I can see that the file size is ~10 KB, whereas Postman shows the same response as ~6.61 KB. How is such a large difference possible?
EDIT
The number given by Postman doesn't match the number of characters in the file.
C02SH03Q:~ pvangala$ wc test/2.txt
256 523 9690 test/2.txt
This is due to the auto-formatting of the raw text into pretty-printed JSON when saving; as a result, the size of the content increases.
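A quick way to see the effect: pretty-printing the same JSON adds whitespace and newlines, so the byte count grows. A minimal sketch with made-up data:

import json

# made-up payload purely for illustration
payload = {"items": [{"id": i, "name": "item-%d" % i} for i in range(100)]}

compact = json.dumps(payload, separators=(",", ":"))  # roughly what the server sent
pretty = json.dumps(payload, indent=4)                # what gets saved after auto-formatting

print(len(compact), "bytes compact")
print(len(pretty), "bytes pretty-printed")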
I'm running JMeter 5.4 (via Jenkins) to work through a long list of URLs (from a .txt file) in order to check that they return a 200/301 status code.
When I run the test, some of them fail, so what I'd like to do is extract any URLs that return a 500 status code and output these (just the 500 status code URLs) to a separate CSV file so I can easily see which URLs fail.
I would also like to be able to view this new (500 failure) CSV file in Jenkins (I have the performance module up and running), but I think I'll try to walk before I run! :)
Is this possible, and if so how would I go about extracting 500 status code URLs in JMeter?
Any help would be greatly appreciated.
You can use a Listener like the Simple Data Writer to store the failed requests' URLs in a file. Example configuration for the Simple Data Writer:
You can use a JSR223 PostProcessor to write the URLs to a CSV file.
Add a JSR223 PostProcessor to your HTTP Request as a child element.
Inside the script area, check the response code (== '500') and write the URL:
println("Before checking the response code ")
if (prev.getResponseCode().equalsIgnoreCase("500")) {
//print
println("Start writing to file ")
FileWriter fstream = new FileWriter("failed-urls.csv",true);
fstream.write(vars.get('URL')+"\n");
fstream.close();
}
Methods available from the previous sample result (the prev variable) can be found in the JMeter API documentation.
The JSR223 PostProcessor needs to be placed as a child element of the HTTP Request.
I am creating the following anchor tag dynamically to download the file I receive from the Flask backend. The URL in a.href is always constant, but the content of output.mp4 keeps changing.
However, the content of the file I get on a.click() is not changing. The file I get is one I created at least 3-4 hours ago. How do I get the updated file on each a.click() call?
var a = document.createElement('a')
a.href = 'http://localhost:5000/download/output'
a.setAttribute('download', 'output.mp4')
a.click()
This is almost 100% to do with the cache setup on the backend.
A simple solution would be to append a cache-busting query parameter to the request URL. Note that the download attribute only sets the saved file's name, so the parameter has to go on the href, such as
a.href = 'http://localhost:5000/download/output?cachebuster=' + Date.now()
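If you control the Flask backend, you can also tell the browser not to cache the response at all. A minimal sketch, assuming a Flask route like the /download/output endpoint implied by the question (the route and the server-side file path are assumptions):

from flask import Flask, send_file, make_response

app = Flask(__name__)

@app.route('/download/output')
def download_output():
    # path to the freshly generated file; 'output.mp4' here is an assumption
    response = make_response(send_file('output.mp4'))
    # tell the browser and any proxies never to reuse a cached copy
    response.headers['Cache-Control'] = 'no-store, no-cache, must-revalidate, max-age=0'
    return response

if __name__ == '__main__':
    app.run(port=5000)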
Why does this function work on a direct URL to a download but fail on a PHP page echoing out a file for download? (GetLastError is 0)
Not all HTTP responses include a Content-Length field. Dynamic pages generated by PHP scripts might not know in advance how large the content actually is.
In these cases you just need to read a little at a time until there is no more data returned from the server.
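For illustration, a minimal sketch of that read-until-empty loop in Python with the requests library (the URL, chunk size, and file name are placeholders):

import requests

# stream the response so nothing relies on a Content-Length header
response = requests.get('http://example.com/download.php', stream=True)

with open('download.bin', 'wb') as out_file:
    while True:
        chunk = response.raw.read(8192)  # read a little at a time
        if not chunk:                    # an empty read means the server is done
            break
        out_file.write(chunk)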
I am trying to test a web service's performance and am having a few issues with using and passing variables. There are multiple sequential requests that depend on data coming from a previous response. Each request needs to be Base64-encoded and placed in a SOAP envelope namespace before being sent to the endpoint. The endpoint returns an encoded response, which needs to be decoded to see the XML values that are needed for the next request. What I have done so far is:
1) Added a Beanshell PreProcessor to the first sampler to encode the payload, which is read from a file.
2) Added a regex to pull the encoded portion out of the whole response.
3) Added a Beanshell PostProcessor to decode the response and write it to a file (just in case). I have stored the decoded response in a variable 'Output', and I know this works since it writes the response to the file correctly.
4) After this, I added 4 regex extractors and tried various things, such as applying them to different parts, checking different fields, checking a JMeter variable, etc. However, it doesn't seem to work.
This is what my tree looks like.
JMeter Tree
I am storing the decoded response in the 'Output' variable like this, and it works since it writes to the file properly:
import org.apache.commons.codec.binary.Base64;

// decode the Base64 string captured by the 'Createregex' extractor
String Createresponse = vars.get("Createregex");
vars.put("response", new String(Base64.decodeBase64(Createresponse.getBytes("UTF-8"))));
Output = vars.get("response");

// redirect print() output to a file and write the decoded response
f = new FileOutputStream("filepath/Createresponse.txt");
p = new PrintStream(f);
this.interpreter.setOut(p);
print(Output);
f.close();
And this is how I am using the regex after that; I have tried different options:
Regex settings
Unfortunately, though, the regex is not picking up these values from the 'Output' variable. I basically need them saved so I can use ${docID} in the payload file for the next request.
Any help on this is appreciated! Also happy to provide more detail if needed.
EDIT:
I had a follow-up question. I am trying to run this with multiple users. I have a field ${searchuser} in my payload XML file, which is called in the pre-processor here.
The CSV Data Set above it looks like this:
However, it is not picking up the values from the CSV and substituting them in the payload file. Any help is appreciated!
You have 2 problems with your Regular Expression Extractor configuration:
Apply to: needs to be the JMeter Variable response
Field to check: needs to be Body; Body as a Document is used for binary file formats like PDF or Word.
By the way, you can do Base64 decoding and encoding using the __base64Decode() and __base64Encode() functions available via JMeter Plugins. The plugins, in their turn, can be installed in one click using the Plugin Manager.
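As a sanity check outside JMeter, the decode step from the Beanshell snippet above is equivalent to this Python sketch (the sample XML is made up):

import base64

# stand-in for the encoded value captured by the 'Createregex' extractor
encoded = base64.b64encode(b"<response><docID>12345</docID></response>")

# mirrors Base64.decodeBase64(...) in the Beanshell post-processor
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # the XML the regex extractors should then match against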
I'm using the requests library and Python 2.7 to download a gzipped text file from a web API. Using the code below, I'm able to successfully send a GET request and, judging from the headers, receive a response in the form of a gzip file.
I know requests decompresses these files for you automatically if it detects from the header that the response is gzipped. I wanted to take that download in the form of a file stream and write the contents to disk for storage and future analysis.
When I open the resulting file in my working directory, however, I get characters like this: —}}¶— Q#Ï 'õ
For reference, some of the response headers include 'Content-Encoding': 'gzip', 'Content-Type': 'application/download', 'Accept-Encoding,User-Agent'
Am I wrong to write in binary? Am I not encoding the text correctly (i.e. could it be ASCII vs. UTF-8)? There is no apparent character encoding noted in the response headers.
import requests

try:
    # stream the response so it can be written to disk in chunks
    response = requests.get(url, paramDict, stream=True)
except Exception as e:
    print(e)

with open(outName, 'wb') as out_file:
    for chunk in response.iter_content(chunk_size=1024):
        out_file.write(chunk)
EDIT 3.30.2016:
Now I've changed my code a little bit to utilize the gzipstream library. I tried using the stream to read the entirety of the gzipped text file in my response content:
with open(outName, 'wb') as out_file, GzipStreamFile(response.content) as fileStream:
    streamContent = fileStream.read()
    out_file.write(streamContent)
I then received this error:
out_file.write(streamContent)
AttributeError: '_GzipStreamFile' object has no attribute 'close'
The output was an empty text file with the file name as anticipated. Do I need to initialize my streamContent variable outside of the with block so that it doesn't automatically try to call a close method at the end of the block?
EDIT 4.1.2016: Just thought I'd clarify that this DOES NOT have to be a stream; that was just one solution I encountered. I just want to make a daily request for this gzipped file and have it saved locally in plain text.
import requests
import zlib

try:
    response = requests.get(url, params=paramDict)
except Exception as e:
    print(e)

# zlib.MAX_WBITS | 32 tells zlib to auto-detect the gzip/zlib header
data = zlib.decompress(response.content, zlib.MAX_WBITS | 32)
with open('outFileName.txt', 'w') as outFile:
    outFile.write(data)
Here is the code that I wrote that ended up working. It is as sigmavirus said: the file was gzipped to begin with. I knew this fact, but apparently did not describe it clearly enough, as I kept reading/writing the gzipped bytes.
Using the zlib module, I was able to decompress the content of the response all at once into the data variable; I then wrote that variable containing the decompressed data into a file.
I'm not sure if this is the best or most Pythonic way to do this, but it worked. If anyone can enlighten me as to why I cannot gzip.open this content (perhaps I needed to use an alternative method; I tried the gzipstream library to no avail), I would appreciate any explanations, but I do consider this question answered.
Thanks to everyone who helped me, even if you didn't have the solution, you helped encourage me to persevere!
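On the gzip.open question: in Python 2.7, gzip.open expects a filename rather than a byte string, which is likely why it failed here. Wrapping the bytes in a file-like object should work; a sketch under that assumption, reusing response from the snippet above:

import gzip
from StringIO import StringIO  # Python 2.7; on Python 3 use io.BytesIO

# wrap the gzipped response bytes in a file-like object for GzipFile
data = gzip.GzipFile(fileobj=StringIO(response.content)).read()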
So the combination here of stream=True and iter_content is what is causing your problems. What you might want to do is something akin to this (to preserve the streaming behaviour):
import requests

try:
    response = requests.get(url, params=paramDict, stream=True)
except Exception as e:
    print(e)

raw = response.raw
with open(outName, 'wb') as out_file:
    while True:
        # decode_content=True undoes the gzip content-encoding as we read
        chunk = raw.read(1024, decode_content=True)
        if not chunk:
            break
        out_file.write(chunk)
Note that you still want to write bytes because you haven't determined the character encoding of the content; you still have bytes, but you're no longer dealing with the gzipped bytes.
You are requesting the raw socket stream, which strips off the chunked transfer encoding but leaves the content coding intact. In other words: what you've got there is almost certainly the gzipped content. The presence of the Content-Encoding: gzip header is a strong indicator of that, as HTTP clients are required to remove the header should they remove the content coding.
One way to eliminate this would be to send an empty Accept-Encoding header with the request to indicate that no encoding is acceptable. If the API is RFC-compliant, you should receive an uncompressed response. The other way would be to decompress the stream yourself. I believe this cannot be done natively by the gzip and zlib modules; however, the gzipstream lib should give you a start.
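For example, the first approach could look like this with requests, reusing url and paramDict from the question (a sketch; whether it helps depends on the API honouring the header):

import requests

# an empty Accept-Encoding header asks the server not to compress the body
response = requests.get(url, params=paramDict, headers={'Accept-Encoding': ''})

with open('outFileName.txt', 'wb') as out_file:
    out_file.write(response.content)  # plain bytes, no gzip to undo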