Uncaught exception java.lang.StackOverflowError using JMeter - concurrency

I want to ask about a result I get when I execute JMeter in CLI mode.
When I run the test plan from the JMeter GUI on my computer it works fine, but when I try to execute it in CLI mode, it shows this error:
Until now, I still don't know what happened :(
Here is the text in the log file,
this is the thread group,
and here is the controller.

As the error suggests, you should check the jmeter.log file for any suspicious entries.
In the majority of cases a StackOverflowError occurs in JMeter when you have an issue with code/condition evaluation, i.e. check your:
If Controllers
While Controllers
JMeter Functions
and all other places where code, a function, or a condition is evaluated.
If the jmeter.log file doesn't tell the full story, you can increase logging verbosity.
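For example, a minimal sketch of a non-GUI run with the log level raised via the -L option (the test plan, results file and log file names here are placeholders for your own setup):

jmeter -n -t test-plan.jmx -l results.jtl -j jmeter.log -LDEBUG

At DEBUG level JMeter typically logs how conditions and functions are evaluated, which helps point at the controller or function that keeps recursing.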


Integration point for Postman clean-up

Is there a way to incorporate a clean-up script in Postman?
Use case: after the collection run (either success or failure), I need to clear data in some of the databases/data-stores,
using a construct similar to try{} finally{}.
For example, a collection run contains two APIs:
api1 -> puts the data in Redis.
api2 -> functional verification
(the expected clean-up hook) would remove the data that was put in step 1.
Writing it at the end of the test script of api2 works fine only if there are no errors in the execution of that test script.
The problem gets worse when there is a large number of APIs and multiple data entries. We can handle this with setNextRequest, but that brings additional code to be written in each test script.
You could probably achieve this by running the collection file within a script, using Newman, as sketched below. This should give you more flexibility and control over running certain actions at different points before, during and after the run.
More information about the different options can be found here: https://github.com/postmanlabs/newman/blob/develop/README.md#api-reference
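For example, a minimal sketch of such a Node script (the collection path and the cleanUpDataStores function are placeholders for your own setup); the callback runs whether the collection passed or failed, so it behaves like a finally block:

const newman = require('newman');

newman.run({
    collection: require('./my-collection.json'), // placeholder path
    reporters: 'cli'
}, function (err, summary) {
    // Runs once the whole collection has finished, on success or failure.
    cleanUpDataStores(); // placeholder: your Redis/database clean-up logic
    if (err) { throw err; }
});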
If it's just clearing out certain variable values, this can be done within the Tests tab of the last request in your collection.

How to debug internal server errors in Eclipse when coding a Django project?

I have a lot of trouble debugging an AJAX view. That's a view which expects a POST request and then returns a JSON object. It causes an internal server error 500, but Eclipse doesn't give more information. The standard debug page cannot be accessed, because the view redirects if there is no POST data.
What is the best approach to tackle these problems? Can I get Eclipse/PyDev to just tell me what the internal error 500 exactly is? Or do I really have to get a browser plugin and construct POST data? (Which might be difficult, because a file upload is involved.)
Effectively I'm looking for a way to get the exception message in the console. Currently it just says:
[16/Feb/2015 17:38:03] "POST /fotos/upload/ HTTP/1.1" 500 10907
Which is not a big help.
Important: This question is about how to make debugging easier and not about fixing this particular view. So no need to ask for code or logfiles of that view. It's a general question about how to go ahead.
Thank you for your time!
Internal server errors can be generated in many ways. One of the most prominent causes is a syntax error in the server code. The syntax error can range from typos to incorrect indentation (in Python). Try debugging your Python code: find the point of error and see if there is a misspelling or an indentation error, like using tabs instead of spaces (or vice versa).
Also, if you're running Django in debug mode, open the link giving the error in a browser, and it will directly show you if there are any compile-time errors in the code.
EDIT: And I totally missed the part where you mentioned that "the standard debug page cannot be accessed". Well, in that case I'd resort to standard print statements for debugging and check the server logs for the point of failure.
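One minimal sketch of getting the traceback into the runserver console is Django's standard LOGGING setting (merge this with any existing LOGGING dict in your settings.py); the django.request logger receives the full stack trace of every 500 response:

# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # 'django.request' logs an ERROR with the full traceback for 5xx responses
        'django.request': {
            'handlers': ['console'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}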
Put a breakpoint in the view (click in the left margin of the line, or right-click there and choose Add Breakpoint). Once execution reaches that point, a debugging view opens containing:
- variables and their values
- all your breakpoints

Troubleshooting Cascalog

I am a new Clojure/Cascalog user trying to migrate some Pig scripts, but I often get an error like the following in the REPL.
FlowException local step failed cascading.flow.planner.FlowStepJob.blockOnJob (FlowStepJob.java:191)
"with-debug" gives some more information but still no root cause of the issue. Any ideas on how to improve this?
I agree that the stack traces are sometimes very unhelpful. One thing I can suggest is writing unit tests: http://sritchie.github.com/2011/09/30/testing-cascalog-with-midje.html which narrows down significantly where your actual problem lies.
If your query works for basic cases but fails on big data, you can add a trap to see which inputs are causing the failure:
(<- .... (:trap (hfs-textline "s3://.../errors" :sinkmode :replace)))

How to keep the unit test output in Jenkins

We have managed to have Jenkins correctly parse the XML output from our tests and also include the error information, when there is one, so that it is possible to see, directly in the test case in Jenkins, the error that occurred.
What we would like to do is to have Jenkins keep a log output, which is basically the console output, associated with each case. This would enable anyone to see the actual console output of each test case, failed or not.
I haven't seen a way to do this.
* EDIT *
Clarification - I want to be able to see the actual test output directly in the Jenkins interface, the same way it does when there is an error, but for the whole output. I don't want Jenkins to just keep the file as an artifact.
* END OF EDIT *
Can anyone help us with this?
In the Publish JUnit test result report (Post-build Actions) tick the Retain long standard output/error checkbox.
If checked, any standard output or error from a test suite will be retained in the test results after the build completes. (This refers only to additional messages printed to console, not to a failure stack trace.) Such output is always kept if the test failed, but by default lengthy output from passing tests is truncated to save space. Check this option if you need to see every log message from even passing tests, but beware that Jenkins's memory consumption can substantially increase as a result, even if you never look at the test results!
This is simple to do - just ensure that the output file is included in the list of artifacts for that job and it will be archived according to the configuration for that job.
Not sure if you have solved it yet, but I just did something similar using Android and Jenkins.
What I did was use http://code.google.com/p/the-missing-android-xml-junit-test-runner/ to run the tests in the Android emulator. This will create the necessary JUnit-formatted XML files on the emulator file system.
Afterwards, simply use 'adb pull' to copy the files over, and configure Jenkins to parse the results. You can also archive the XML files as artifacts if necessary.
If you simply want to display the content of the results in the log, you can use an 'Execute Shell' build step to print it out to the console, where it will be captured in the log file.
Since Jenkins 1.386 there has been an option to Retain long standard output/error in each build configuration, so you just have to check that checkbox in the post-build actions.
http://hudson-ci.org/changelog.html#v1.386
When using a declarative pipeline, you can do it like so:
junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
See the documentation for the keepLongStdio parameter: if checked, any standard output or error from a test suite will be retained in the test results after the build completes; such output is always kept if the test failed, but by default lengthy output from passing tests is truncated to save space.
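For context, a minimal sketch of where that step can live in a declarative pipeline (the agent, stage, test command and report path below are placeholders for your own setup):

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh './gradlew test'   // placeholder test command
            }
        }
    }
    post {
        always {
            // keepLongStdio retains stdout/stderr from passing tests as well
            junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
        }
    }
}

Putting the junit step in a post { always { ... } } block means the report is collected even when the test stage fails.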

CppUnit setup for C++

In CppUnit we run unit tests as part of the build, in a post-build step, and we will be running multiple tests as part of this. If any test case fails, the post-build step should not stop; it should go ahead, run all the test cases, and report a summary of how many test cases passed and failed. How can we achieve this?
Thanks!
The question is specific enough. You need a test runner. Encapsulate each test in its own behaviour and class, and keep the test project separate from the tested code. Afterwards just configure your XmlOutputter. You can find an excellent example of how to do this on the YoLinux website: http://www.yolinux.com/TUTORIALS/CppUnit.html
We compile our test projects alongside our main projects this way and check that everything is OK. After that it all comes down to maintaining your test code.
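A minimal sketch of such a runner, along the lines of that tutorial (the output file name is a placeholder); it runs every registered suite even when individual cases fail, writes an XML summary, and only the final return code reflects the overall result:

#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/XmlOutputter.h>
#include <fstream>

int main() {
    CppUnit::TextUi::TestRunner runner;
    // Pick up every suite registered with CPPUNIT_TEST_SUITE_REGISTRATION.
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());

    // Write the pass/fail summary as XML so the build system can parse it.
    std::ofstream xmlOut("testresults.xml"); // placeholder file name
    runner.setOutputter(new CppUnit::XmlOutputter(&runner.result(), xmlOut));

    // run() executes all registered tests even if some of them fail,
    // and returns true only when every test passed.
    bool allPassed = runner.run();
    return allPassed ? 0 : 1;
}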
Your question is too vague for a precise answer. Usually a unit test engine returns a code to tell that it has failed (like a non-zero return code in the shell on Linux) or generates an output file with the results, and the calling system handles this. If you have written the calling system yourself (some home-made scripts), you have to add the option to continue test execution even if an error occurs. If you are using a tool like a continuous integration server, then you have to go through the docs and find the option that lets the run go on when tests fail.
A workaround is to write a script that returns an "OK" result even if the unit tests fail, but then you lose some automatic verification...
Be more specific if you want more clues.
my2c
I would just write your tests this way: instead of using the CPPUNIT_ASSERT macros or whatever, write them in regular C++ with some way of logging errors.
You could use a macro for this too, of course. Something like:
LOGASSERT( some_expression )
could be defined to execute some_expression and to log the expression together with __FILE__ and __LINE__ if it fails. You can also log exceptions, of course, as well as ones that are not thrown, simply by writing the checks in your tests (with macros, if you want to log the expression that caused them with __FILE__ and __LINE__).
If you are writing macros, I would advise you to limit the content of the macro to calling an inline function with extra parameters, as in the sketch below.
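A minimal sketch of that idea (the helper name logCheck and the output format are illustrative, not taken from any particular framework):

#include <iostream>

// The macro forwards the expression text, file and line to a small
// inline function that performs the check and does the actual logging.
inline bool logCheck(bool passed, const char* expr, const char* file, int line) {
    if (!passed) {
        std::cerr << file << ":" << line << ": check failed: " << expr << "\n";
    }
    return passed;
}

#define LOGASSERT(some_expression) \
    logCheck((some_expression), #some_expression, __FILE__, __LINE__)

A test would then call, for example, LOGASSERT(result == 42); and keep running after a failure instead of aborting.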