Every time I restart Postman and start a collection run, the order of the endpoints gets reset and I have to drag/drop them into the correct order for the run. Can this ordering be saved?
Drag and drop them into the correct order once, then export the collection.
This is an annoying little problem when you want to re-run your test runners.
I solved this by duplicating my collection into a new folder.
Then I arranged the requests in the correct order. Now when you start the test runner using this collection, the order is preserved.
However, it's not ideal, because now you have to maintain two collections of the same requests, perhaps not even in the same folder structure.
The only truly clean solution would be if test runner configurations could be saved as part of your Postman collection.
Let me know if I missed something, this was just my experience.
I would like to create functional tests to cover my Django project. It is a multipage form that accepts input on each page. The input on previous pages changes the content/options for the current pages. The testing is currently set up with Splinter and PhantomJS. I see two main ways to set this up.
For each page, create an instance of that page using stored data and point Splinter towards that.
Benefits
Allows random access testing of any page in the app
Can reuse unit test definitions to populate these synthetic pages
Downsides
Needs some kind of back end to be set up that Splinter can point to (at this point I have no idea how this would work, but it seems time-consuming)
Structure the testing so that it is done in order, with the test content of page 1 passed to page 2
Benefits
Seems like it should just work out of the box
Downsides
Does not allow the tests to be run in an arbitrary order/only one at a time
Would probably take longer to run
Errors on earlier pages will affect later pages
I've found many tutorials on how to do functional testing on a small scale (individual pages/features/etc.) but I'm trying to figure out if there is an accepted way or best practice on how to structure it over a large project. Is it one of these? Something else that I've missed?
What I was looking for was fixtures (https://docs.djangoproject.com/en/1.11/ref/django-admin/#django-admin-dumpdata). Things just get way too complex if you're trying to pass browser state between a whole project worth of tests. Easy to grab the DB state, easy to load in.
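In Django the concrete commands are `manage.py dumpdata` to capture a baseline and `loaddata` (or a test case's `fixtures` attribute) to restore it. The underlying snapshot/restore idea can be sketched with plain sqlite3; the table and column names here are invented for illustration:

```python
import sqlite3

def dump_state(conn):
    """Serialize the current DB state to a SQL script (like dumpdata)."""
    return "\n".join(conn.iterdump())

def load_state(conn, script):
    """Restore a previously captured state (like loaddata)."""
    conn.executescript(script)

# Capture a known-good baseline once...
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answers (page INTEGER, value TEXT)")
conn.execute("INSERT INTO answers VALUES (1, 'yes')")
baseline = dump_state(conn)

# ...then reset to it before each page-level test, so any page of the
# multipage form can be tested against the same known input state.
fresh = sqlite3.connect(":memory:")
load_state(fresh, baseline)
rows = fresh.execute("SELECT page, value FROM answers").fetchall()
```

Because each test starts from the same captured state, pages can be tested in any order without threading browser state through the whole run.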
I've searched up and down and haven't found any answers I quite understand, so I thought I'd post my own question.
I'm building a web application (specifically in web2py, though I don't believe that should matter) on Python 2.7, to be hosted on Windows.
I have a list of about 2000 items in a database table.
The user will open the application, which will initially select all 2000 items into Python and return the list to the user's browser. After that, the user will filter the list based on one or more values of one or more attributes of the items.
I want Python to hold the unadulterated list of 2000 items in memory between the user's changes of filtering options.
Every time the user changes their filter options,
trip the change back to Python,
apply the filter to the in-memory list and
return the subset to the user's browser.
This is to avoid hitting the database with every change of filter options, and to avoid passing the list in session over and over.
Most of this I'm just fine with. What I'm seeking your advice on is how to get Python to hold the list in memory. In C# you would just make it a static object.
How do you do a static (or whatever other scheme that applies) in Python?
Thanks for your remarks.
While proofreading this, I see I'm still probably passing at least big portions of the list back and forth anyway, so I will probably manage the whole list in the browser.
But I'd still like to hear your suggestions; maybe something you say will help.
As you seem to conclude, there isn't much reason to be sending requests back and forth to the server given that the server isn't generating any new data that isn't already held in the browser. Just do all the filtering directly in the browser.
If you did need to do some manipulation on the server, though, don't necessarily assume it would be more efficient to search/filter a large dataset in Python rather than querying the database. You should do some testing to figure out which is more efficient (and whether any efficiency gains are worth the hassle of adding complexity to your code).
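For completeness, the closest Python analogue to a C# static is a module-level variable: a module is imported once per process, so anything bound at module level persists across requests served by that process. (Note that with multiple worker processes each one gets its own copy, so this is a sketch of the idea rather than web2py-specific code; the loader and attribute names are invented.)

```python
# items_cache.py - module-level state lives for the life of the process
_items = None  # the unfiltered list, loaded once

def get_items(loader):
    """Return the cached list, loading it on first access."""
    global _items
    if _items is None:
        _items = loader()
    return _items

def filter_items(loader, **attrs):
    """Apply attribute filters to the in-memory list without touching the DB."""
    result = get_items(loader)
    for name, wanted in attrs.items():
        result = [item for item in result if item.get(name) in wanted]
    return result

# Example usage with a stand-in loader instead of a real database query:
load = lambda: [{"color": "red"}, {"color": "blue"}]
subset = filter_items(load, color={"red"})
```

Each filter change then only re-runs the list comprehension over the cached data; the loader is never called a second time in that process.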
I have been writing a lot of unit tests for the code I write. I've just started to work on a web project and I have read that WatiN is a good test framework for the web.
However, I'm not exactly sure what I should be testing. Since most of the web pages I'm working on are dynamic user generated reports, do I just check to see if a specific phrase is on the page?
Besides just checking if text exists on a page, what else should I be testing for?
First think of what business cases you’re trying to validate. Ashley’s thoughts are a good starting point.
You mentioned most pages are dynamically generated user reports. I’ve done testing around these sorts of things and always start by figuring out what sort of baseline dataset I need to create and load. That helps me ensure I can get exactly the appropriate set of records back in the reports that I expect if everything's working properly. From there I’ll write automation tests to check I get the right number of records, the right starting and ending records, records containing the right data, etc.
If the reports are dynamic then I’ll also check whether filtering works properly, that sorting behaves as expected, etc.
Something to keep in mind is to keep a close eye on the value of these tests. It may be that simply automating a few tests around main business use cases may be good enough for you. Handle the rest manually via exploratory testing.
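The baseline-dataset checks described above (right record count, right first and last records, correct sort order) can be sketched generically; the record shape and expected values here are invented for illustration, and in practice `rows` would come from scraping the rendered report:

```python
def check_report(rows, expected_total, first_id, last_id):
    """Assert a generated report matches the known baseline dataset."""
    assert len(rows) == expected_total, "wrong number of records"
    assert rows[0]["id"] == first_id, "wrong starting record"
    assert rows[-1]["id"] == last_id, "wrong ending record"

def check_sorted(rows, key):
    """Verify the report's sort order behaves as expected."""
    values = [row[key] for row in rows]
    assert values == sorted(values), "rows not sorted by " + key

# With a baseline of three known records loaded before the test:
report = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}, {"id": 3, "name": "c"}]
check_report(report, expected_total=3, first_id=1, last_id=3)
check_sorted(report, "id")
```

The point is that every assertion is derived from the dataset you loaded, not from eyeballing one page, so the same checks keep working as the report logic changes.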
You essentially want to be testing as if you are a user entering your site for the first time. You want to make sure that every aspect of your page is running exactly the way you want it to. For example, if there is a signup/login screen, automate those flows to ensure that they are both working properly. Automate the navigation of various pages, using assertions just to ensure each page loaded. If there are generated reports, automate all generations and check the text on the output to ensure it is what the "user" (you) specified. If you have any logic saying, for example, that when you check this box all other boxes should be checked as well, assert that too. I am not sure what unit-testing software you are using, but most have a very rich assortment of assertions.
I am currently faced with the task of importing around 200K items from a custom CMS implementation into Sitecore. I have created a simple import page which connects to an external SQL database using Entity Framework and I have created all the required data templates.
During a test import of about 5K items I realized that I needed to find a way to make the import run a lot faster, so I set about finding some information about optimizing Sitecore for this purpose. I have concluded that there is not much specific information out there, so I'd like to share what I've found and open the floor for others to contribute further optimizations. My aim is to create some kind of maintenance mode for Sitecore that can be used when importing large volumes of data.
The most useful information I found was in Mark Cassidy's blog post http://intothecore.cassidy.dk/2009/04/migrating-data-into-sitecore.html. At the bottom of this post he provides a few tips for when you are running an import.
If migrating large quantities of data, try and disable as many Sitecore event handlers and whatever else you can get away with.
Use BulkUpdateContext()
Don't forget your target language
If you can, make the fields shared and unversioned. This should help migration execution speed.
The first thing I noticed in this list was the BulkUpdateContext class, as I had never heard of it. I quickly understood why, as a search on the SDN forum and in the PDF documentation returned no hits. So imagine my surprise when I actually tested it out and found that it improves item creation/deletion speed at least tenfold!
The next thing I looked at was the first point, where he basically suggests creating a version of web.config that only has the bare essentials needed to perform the import. So far I have removed all events related to creating, saving and deleting items and versions. I have also removed the history engine and system index declarations from the master database element in web.config, as well as any custom events, schedules and search configurations. I expect there are a lot of other things I could remove or disable to increase performance. Pipelines? Schedules?
What optimization tips do you have?
Incidentally, BulkUpdateContext() is a very misleading name - as it really improves item creation speed, not item updating speed. But as you also point out, it improves your import speed massively :-)
Since I wrote that post, I've added a few new things to my normal routines when doing imports.
Regularly shrink your databases; they tend to grow large and bulky. To do this, first go to Sitecore Control Panel -> Database and select "Clean Up Database". After this, do a regular ShrinkDB on your SQL server
Disable indexes, especially if importing into the "master" database. For reference, see http://intothecore.cassidy.dk/2010/09/disabling-lucene-indexes.html
Try not to import into "master", however; you will usually find that imports into "web" are a lot faster, mostly because that database isn't (by default) connected to the HistoryManager or other gadgets
And if you're really adventurous, there's one more thing you could try that I'd been considering trying out myself but never got around to. It might work, but I can't guarantee that it will :-)
Try removing all your field types from App_Config/FieldTypes.config. The theory here is, that this should essentially disable all of Sitecore's special handling of the content of these fields (like updating the LinkDatabase and so on). You would need to manually trigger a rebuild of the LinkDatabase when done with the import, but that's a relatively small price to pay
Hope this helps a bit :-)
I'm guessing you've already hit this, but putting the code inside a SecurityDisabler() block may speed things up also.
I'd be a lot more worried about how Sitecore performs with this much data... Assuming you only do the import once, who cares how long that process takes? Is this going to be a regular occurrence?
I'm setting up some Selenium tests for an internal web app and looking for advice on a testing 'best practice'. One of the tests is going to add some data via the UI that cannot be removed via the UI (e.g., you can add a record via the web app, but removing requires contacting someone internally to remove it at the database level). How do you typically account for cleaning up data after the Selenium test is run?
The app in question is written in PHP and I'm using PHP for testing (with Selenium RC and SimpleTest), but I'm open to other tools, etc. as this is just a broad best practice question. The app being tested is in our development environment, so I'm not particularly worried about data carrying over from tests.
Some ideas:
Manually connect to the database in the Selenium test to clean up the data
Use something like DBUnit to manage this?
Just add data and don't worry about cleaning it up (aka, the lazy approach)
Thanks!
Edit: Seems most of the ideas centered around the same conclusion: work off a known set of data and restore it when the tests are finished. The mechanism for this will probably vary depending on language, amount of data, etc., but this looks like it should work for my needs.
I use Selenium with a Rails application, and I use the fixture mechanism to load and unload data from the test database. It's similar to the DbUnit approach, though I don't unload and reload between tests due to the volume of data. (This is something I'm working on, though.)
We have a web front end to a database restore routine. First thing our tests do is restore a "well known" starting point.
Point the webapp to a different database instance that you can wipe when you are done with the tests. Then you will have the database to inspect after the tests have run if you need to debug, and you can just blow away all the tables when you are done. You could get an export of the current database and restore it into your fresh instance before the tests if you need seed data.
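A minimal sketch of that wipe-and-reseed idea, using sqlite3 for self-containment (the table and seed data are invented; a PHP test suite would do the same against its test database instance):

```python
import sqlite3

SEED = [("widget", 10), ("gadget", 20)]  # hypothetical known seed data

def reset_test_db(conn):
    """Blow away the test table and reload the known seed data."""
    conn.execute("DROP TABLE IF EXISTS products")
    conn.execute("CREATE TABLE products (name TEXT, price INTEGER)")
    conn.executemany("INSERT INTO products VALUES (?, ?)", SEED)
    conn.commit()

conn = sqlite3.connect(":memory:")
reset_test_db(conn)
# ...run Selenium tests that add records via the UI, e.g.:
conn.execute("INSERT INTO products VALUES ('leftover', 99)")
reset_test_db(conn)  # next run starts from the same known state
count = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
```

Because the reset is idempotent, leftover records from the UI tests can never accumulate between runs, and the database stays available for inspection until the next reset.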
Avoid the lazy approach. It's no good and will ultimately fail you. See my previous response on this topic in this separate StackOverflow question.
Agree with the other answers here. I've wired in Selenium and DBUnit tests to the past 3 projects I've worked on. On the first project we tried the lazy approach, but predictably it fell in a heap, so we used DBUnit and I've not looked back.
I realize you are using PHP, so please translate DBUnit/JUnit to your PHP equivalents.
A couple of points:
Use as little data as possible. With many Selenium tests running, you want the DBUnit load to be as quick as possible, so try to minimize the amount of data you are loading.
Only load the data that changes. Often you can skip tables which are never changed by the web app, such as reference data tables. However, you might want to create a separate DBUnit XML file/db backup to load this data in case you accidentally lose it.
Let the JUnit Selenium tests choose whether they need a reload. Some Selenium tests will not change any data, so there is no point reloading the database after they run. In each of my Selenium tests I override/implement a method to return the desired DBUnit behavior.
@Override
protected DBUnitRunConfig getDBUnitRunConfig() {
    return DBUnitRunConfig.RUN_ONCE_FOR_THIS_TEST_CASE;
}
Where DBUnitRunConfig is:
public enum DBUnitRunConfig {
    NONE,
    RUN_IF_NOT_YET_RUN_IN_ANY_TEST_CASE,
    RUN_ONCE_FOR_THIS_TEST_CASE,
    RUN_FOR_EACH_TEST_IN_TEST_CASE
}
This cuts down the time required to get through the tests. The Selenium enabled super class (or helper class) can then run, or not run, DBUnit for the given tests.