This would seem to be a simple question, but I'm not sure how best to set it up.
I have a few test cases for the same endpoint.
I want to just pass different values for the various {{variables}}.
I know I can use pm.globals.set('..') and other ways to modify the environment during testing, but I don't want to essentially code up my tests in JS, or use Newman. I also want to be able to share tests easily.
I'm assuming there must be somewhere in the UI (maybe the test runner?) to say: run the same test against the same endpoint, but swap out these values and expect different results, e.g.
/login
userId = “{{returningUser}}” => expect success
userId = “{{bannedUserId}}” => expect fail
userId = “{{unknownId}}” => expect fail
etc
Maybe I could script that up, but then I'd also have to use code to "call" the API to re-load the request. It seems like just writing Jest tests in a clunky UI at that point.
I see two possibilities for testing the same endpoint with different data:
Either you use the collection runner with a data file (sketched below), or
You use the newman command line runner, using different environment files.
For sharing tests, you can either use Postman's integrated cloud stuff (never tried that), or export collections and environments and put them into a (git) repository. We're doing the latter. Yes, it's a bit cumbersome, but it works.
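To give a concrete idea of the data-file approach, here is a minimal sketch (the file name, example values, and expected status codes are all assumptions, not from the question). You create a JSON (or CSV) file with one row per case, select it in the Collection Runner, and each iteration resolves {{userId}} from that row.

login-cases.json:

[
  { "userId": "returning-user-42", "expectedStatus": 200 },
  { "userId": "banned-user-13", "expectedStatus": 403 },
  { "userId": "unknown-user-99", "expectedStatus": 403 }
]

Tests tab of the /login request:

// Each Collection Runner iteration binds one row from the data file;
// pm.iterationData exposes the current row's values.
const expected = Number(pm.iterationData.get("expectedStatus"));

pm.test("login returns the expected status for this user", function () {
    pm.response.to.have.status(expected);
});

The Runner then reports pass/fail per iteration, and the collection plus data file can be exported and shared like any other collection.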
Is there a not-too-dirty way to detect at runtime whether the code was started with lein test? I just want to select a different Redis database, so solutions like environ or using different resource files seem like a bit of overkill.
For example, Leiningen automatically enables the test profile, but I haven't found a way to get a list of the currently enabled profiles.
There is no simple way to do it. Neither lein test nor clojure.test exposes such information. Even if you found a way to hack into some private var of lein test or clojure.test and check it to determine whether your code is running as part of lein test, that approach would have a very big issue: your production code would need to require testing library code (e.g. clojure.test), or even worse, your build tool's code (the lein test plugin).
You might instead try to define such a configuration var (dynamic or not) in your production code and set it in your tests using fixtures.
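A minimal sketch of that approach (the namespace and var names are made up; also note that binding is thread-local, so this only works if the code under test reads the var on the test's thread):

;; production code
(ns myapp.config)

(def ^:dynamic *redis-db* 0)

;; test code
(ns myapp.core-test
  (:require [clojure.test :refer [use-fixtures]]
            [myapp.config :as config]))

(defn with-test-redis-db [f]
  (binding [config/*redis-db* 1]   ; tests use a different Redis database
    (f)))

(use-fixtures :each with-test-redis-db)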
The best solution would be to configure your application dynamically based on an external variable such as a system property or an environment variable (e.g. by using the suggested environ). This way you can have as many different configuration sets as you need (e.g. prod vs unit test vs integration test vs performance tests and so on), not just two (prod vs test).
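As a concrete sketch with environ (the REDIS_DB variable and :redis-db key are assumptions, not from the question): an environment variable set when running the tests selects the database, and production simply leaves it unset.

(require '[environ.core :refer [env]])

;; REDIS_DB=1 lein test   -> tests use database 1
;; lein run               -> falls back to database 0
(def redis-db (Integer/parseInt (or (env :redis-db) "0")))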
It might seem like overkill, but component, for instance, was invented for exactly this kind of use case. Or dependency injection in general.
I know that feeling: it's just a private project, no need for heavy machinery, etc. That's why I put together my own template, so that all I need to get started is to run lein new ...
This is my solution to the "just want to select a different Redis database" kind of use case.
Edit: It is a template for a web framework: https://github.com/sveri/closp but a lot of these parts are not specific to web dev, especially the components part: https://github.com/sveri/closp/tree/master/resources/leiningen/new/closp/clj/components
There is also an integration test where I make use of test components specifically: https://github.com/sveri/closp/blob/master/resources/leiningen/new/closp/integtest/clj/web/setup.clj
I found a way with cprop. Set a var in your "env/{dev|test|prod}/config.clj" file:
(System/setProperty "lein.profile" "dev")
then you can read the value:
(require '[cprop.source :as source])
(str "from-system-props: >> " (:lein-profile (source/from-system-props)))
Another option is to search for the key :conf in the system properties:
:conf "test-config.edn"
because the config file changes according to the profile.
I'm writing tests for a React application which makes use of Fluxxor to provide an event dispatcher. Making that work requires telling Jest not to mock a few modules which are used internally, and are provided by Node itself.
That means I can't just add them to the unmockedModulePathPatterns config key, and instead have to use some code like this:
[ 'util', 'events' ].forEach(function (module) {
    jest.setMock(module, require.requireActual(module));
});
However, I can't find anywhere useful to put it. I've got a setupEnvScriptFile which sets up a few globals that I use in almost all my tests, but the jest object doesn't seem to be available in that context, so I can't just set the mocks there.
As a hacky stopgap measure I've wrapped the code above in a function which I call at the beginning of any describe blocks testing Fluxxor stores, but it's far from ideal.
Have you tried config.setupTestFrameworkScriptFile? It seems like it would be the right place to monkey-patch the API, as per the docs.
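For example, something along these lines (a sketch of that suggestion; the file path is made up, and whether the jest object is actually available there is exactly what the next answer addresses):

In package.json:

"jest": {
  "setupTestFrameworkScriptFile": "<rootDir>/test/setup-jest.js"
}

In test/setup-jest.js:

// Runs after the test framework is installed, once per test file,
// so jest.setMock could be called here instead of in each spec.
[ 'util', 'events' ].forEach(function (module) {
    jest.setMock(module, require.requireActual(module));
});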
It seems that the answer, at least currently, is "you can't in this case", but there are issues open for the two changes that need to be made to support it.
https://github.com/facebook/jest/issues/106
https://github.com/facebook/jest/issues/107
FWIW, here's a solution that we have been using to add Fluxxor and React-Router support to our test specs.
https://gist.github.com/adjavaherian/a15ef0461e65d58aacd2
I am trying to build a test program in C++ to automate testing for a specific application. The testing will involve sending requests to a server; each request has a field 'CommandType' and some other fields.
The CommandType can be 'NEW', 'CHANGE' or 'DELETE'.
The tests can be
Send a bunch of random requests with no pattern
Send 100 'NEW' requests, then a huge number of 'CHANGE' requests, followed by 200 'DELETE' requests
Send 'DELETE' requests followed by 'CHANGE' requests
... and so on
How can I design my software (what kind of modules or layers) so that adding any new type of test case is easy and modular?
EDIT: To be more specific, this test will be to only test one specific application that gets requests of the type described above and handles them. This will be a client application that will send the requests to the server.
I would not create your own framework. There are many already written that follow a common pattern and can likely accommodate your needs elegantly.
The xUnit framework in all incarnations I have seen allows you to add new test cases without having to edit the code that runs the tests. For example, CppUnit provides a macro that when added to a test case will auto-register the test case with a global registry (through static initialization I assume). This allows you to add new test cases without cracking open and editing the thing that runs them.
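A minimal sketch of that auto-registration with CppUnit (the test class and method names are made up):

#include <cppunit/extensions/HelperMacros.h>

class NewRequestBurstTest : public CppUnit::TestFixture {
    CPPUNIT_TEST_SUITE(NewRequestBurstTest);
    CPPUNIT_TEST(testHundredNewRequests);
    CPPUNIT_TEST_SUITE_END();

public:
    void testHundredNewRequests() {
        // send 100 'NEW' requests to the server and assert on the replies here
        CPPUNIT_ASSERT(true);  // placeholder assertion
    }
};

// Registers the suite with the global registry during static initialization,
// so the existing runner picks it up without being edited.
CPPUNIT_TEST_SUITE_REGISTRATION(NewRequestBurstTest);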
And don't let the "unit" in xUnit and CppUnit make you think it is inappropriate. I've used the xUnit framework for all different kinds of testing.
I would separate out each individual test into its own procedure or, if it requires code beyond a function or two, its own source file. Then in my main routine I'd do something like:
// Forward declarations; each test lives in its own function or source file.
void run_test_1();
void run_test_2();
void run_test_N();

int main()
{
    run_test_1();
    run_test_2();
    // ...
    run_test_N();
}
Alternatively, I'd recommend leveraging the Boost Test Library and following their conventions.
I'm assuming you're not talking about creating unit tests.
IMHO, your question is too vague to provide useful answers. Is this to test a specific application, or are you trying to make something generic enough to test as many different applications as possible? Where do these applications live? Are they client-server apps, web apps, etc.?
If it's more than one application that you want your tool to test, you'll need an architecture that defines a protocol between the testing tool and the applications, so that you can convert instructions that your tool and its consumers understand into instructions that the application being tested can understand. I've done similar things in the past, but I've only ever had to worry about maybe 5 different "applications", so it was a pretty simple matter of summing up all the unique functionality of the apps and then creating an interface that supports them all.
I wouldn't presume that NEW, CHANGE, and DELETE will be your only command types either. A lot of testing involves data cleanup, test reporting, etc., and applications all handle these in their own special ways.
Use a C++ unit testing framework. Read this for details and examples.
I am working on a project that uses Active Directory intensively. I have set up a few unit tests for several things against the AD, some of which I achieve using mocked objects and some through real calls against the AD.
As one of the functions of my project, I have to retrieve a so-called "user profile". This user profile consists mostly of simple attributes, like "cn", "company", "employeeid", etc. However, one property that I am trying to fill is not a simple one: "NextPasswordChangeDate".
To the best of my knowledge, the only way to get this is by getting the domain policy's maxPwdAge and using it together with pwdLastSet.
Now my question: How can I unit test this in an intelligent way? I came up with three options, all of which are not great:
Use my own account as the searched account, find out the date by other means and hard-code it in the unit test. This way I can unit test my code well, but every month I have to change the unit test, because I changed my password.
Use some account that has "password never expires" set. This is kind of pointless, because I cannot really test the correctness of my code that way.
Use a mock object and make sure that the correct API calls happen. This option allows testing the correctness of the function's behaviour, but then the tested logic is in fact in the unit test, and hence I cannot be sure that it is doing the right thing, even if the test passes.
Which of the three do you suggest? Or maybe you have a better option?
Since options 1 and 2 rely on AD existing and having known values, they seem more like integration tests to me.
I generally take the side that any non-deterministic behavior should be interfaced out and mocked if possible (#3). As you noted, this will always leave some real implementation code that is not unit-testable, but that would then be covered by your integration tests running against a known AD system.
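As a sketch of option 3 in practice (shown in Java only because the question includes no code; all names here are made up): push the AD reads behind a small interface, keep the date arithmetic pure, and unit test the arithmetic against a hand-rolled stub, leaving the real LDAP-backed implementation to the integration tests.

import java.time.Duration;
import java.time.Instant;

interface DirectoryGateway {
    Instant pwdLastSet(String account);   // read from the user object
    Duration maxPwdAge();                 // read from the domain policy
}

class UserProfileService {
    private final DirectoryGateway directory;

    UserProfileService(DirectoryGateway directory) { this.directory = directory; }

    // NextPasswordChangeDate = pwdLastSet + maxPwdAge
    Instant nextPasswordChangeDate(String account) {
        return directory.pwdLastSet(account).plus(directory.maxPwdAge());
    }
}

// Unit test against a stub: the expected value is computed by hand, not read from AD.
// (Run with java -ea to enable the assertion.)
class NextPasswordChangeDateTest {
    public static void main(String[] args) {
        DirectoryGateway stub = new DirectoryGateway() {
            public Instant pwdLastSet(String account) { return Instant.parse("2024-01-01T00:00:00Z"); }
            public Duration maxPwdAge() { return Duration.ofDays(42); }
        };
        Instant actual = new UserProfileService(stub).nextPasswordChangeDate("jdoe");
        assert actual.equals(Instant.parse("2024-02-12T00:00:00Z"));
    }
}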
Related Question/Answer
I'm setting up some Selenium tests for an internal web app and looking for advice on a testing 'best practice'. One of the tests is going to add some data via the UI that cannot be removed via the UI (e.g., you can add a record via the web app, but removing requires contacting someone internally to remove it at the database level). How do you typically account for cleaning up data after the Selenium test is run?
The app in question is written in PHP and I'm using PHP for testing (with Selenium RC and SimpleTest), but I'm open to other tools, etc. as this is just a broad best practice question. The app being tested is in our development environment, so I'm not particularly worried about data carrying over from tests.
Some ideas:
Manually connect to the database in the Selenium test to clean up the data
Use something like DBUnit to manage this?
Just add data and don't worry about cleaning it up (aka, the lazy approach)
Thanks!
Edit: It seems most of the ideas centered around the same conclusion: work off a known set of data and restore it when the tests are finished. The mechanism for this will probably vary depending on the language, the amount of data, etc., but this looks like it should work for my needs.
I use Selenium with a Rails application, and I use the fixture mechanism to load and unload data from the test database. It's similar to the DbUnit approach, though I don't unload and reload between tests due to the volume of data. (This is something I'm working on, though.)
We have a web front end to a database restore routine. First thing our tests do is restore a "well known" starting point.
Point the webapp to a different database instance that you can wipe when you are done with the tests. Then you will have the database to inspect after the tests have run if you need to debug, and you can just blow away all the tables when you are done. You could get an export of the current database and restore it into your fresh instance before the tests if you need seed data.
Avoid the lazy approach. It's no good and will ultimately fail you. See my previous response on this topic in this separate StackOverflow question.
I agree with the other answers here. I've wired Selenium and DBUnit tests into the past 3 projects I've worked on. On the first project we tried the lazy approach, but predictably it fell in a heap, so we used DBUnit and I've not looked back.
I realize you are using PHP, so please translate DBUnit/JUnit to your PHP equivalents.
A couple of points:
Use as little data as possible. With many Selenium tests running, you want the DBUnit load to be as quick as possible, so try to minimize the amount of data you are loading.
Only load the data that changes. Often you can skip tables which are never changed by the web app (reference data tables and so on). However, you might want to create a separate DBUnit XML file/db backup to load that data in case you accidentally lose it.
Let the JUnit Selenium tests choose whether they need a reload. Some Selenium tests will not change any data, so there is no point reloading the database after they run. In each of my Selenium tests I override/implement a method to return the desired DBUnit behavior:
@Override
protected DBUnitRunConfig getDBUnitRunConfig() {
    return DBUnitRunConfig.RUN_ONCE_FOR_THIS_TEST_CASE;
}
Where DBUnitRunConfig is:
public enum DBUnitRunConfig {
    NONE,
    RUN_IF_NOT_YET_RUN_IN_ANY_TEST_CASE,
    RUN_ONCE_FOR_THIS_TEST_CASE,
    RUN_FOR_EACH_TEST_IN_TEST_CASE
}
This cuts down the time required to get through the tests. The Selenium-enabled superclass (or helper class) can then run, or not run, DBUnit for the given tests.
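For completeness, a hedged sketch of what that superclass wiring might look like (the class name, JDBC details, and dataset file are all made up; JdbcDatabaseTester, setDataSet, and onSetup are the real DBUnit calls):

import java.io.File;
import java.util.HashSet;
import java.util.Set;

import junit.framework.TestCase;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;

public abstract class SeleniumDbTestCase extends TestCase {

    private static final Set<Class<?>> loadedCases = new HashSet<Class<?>>();
    private static boolean loadedAtAll = false;

    // Subclasses override this, as in the snippet above.
    protected DBUnitRunConfig getDBUnitRunConfig() {
        return DBUnitRunConfig.RUN_FOR_EACH_TEST_IN_TEST_CASE;
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        boolean reload;
        switch (getDBUnitRunConfig()) {
            case RUN_FOR_EACH_TEST_IN_TEST_CASE:
                reload = true;
                break;
            case RUN_ONCE_FOR_THIS_TEST_CASE:
                reload = !loadedCases.contains(getClass());
                break;
            case RUN_IF_NOT_YET_RUN_IN_ANY_TEST_CASE:
                reload = !loadedAtAll;
                break;
            default:
                reload = false;
        }
        if (reload) {
            reloadDataSet();
            loadedCases.add(getClass());
            loadedAtAll = true;
        }
    }

    private void reloadDataSet() throws Exception {
        IDatabaseTester tester = new JdbcDatabaseTester(
                "com.mysql.jdbc.Driver", "jdbc:mysql://localhost/webapp_test", "test", "test");
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("seed-data.xml"));
        tester.setDataSet(dataSet);
        tester.onSetup();  // default operation is CLEAN_INSERT
    }
}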