I couldn't find the answer I was looking for, so I decided to ask more experienced testers. I've implemented the Page Object Model (POM) in my automation tests. I have a few different objects representing different sections of the website. Should I create a separate test case for each object? That is, should I create separate .py files and import the libraries all over again, or just import the objects into the one .py file that represents the test? Which approach is more appropriate?
Like every pattern, POM is meant to be a reusable solution to a problem in a specific context, and it is that context that makes the pattern worth applying. I assume you weighed the following points before jumping into POMs right away:
Page Objects are often hard to maintain and to use. Careful design is needed when grouping elements into headers, footers, or identifiable widgets; there shouldn't just be a big list of stuff, and names should be readable enough to explain what each element is for.
It might limit your design, e.g. you start ignoring better abstractions.
It offers little flexibility, especially for refactoring (of both structure and implementation).
POM is wrong by design, as it clearly violates the Single Responsibility Principle (SRP) by keeping the element map and the actions upon those elements together.
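To make the SRP point concrete, here is a minimal Python/Selenium-style sketch (class, page, and locator names are purely illustrative) of splitting the element map from the actions, rather than keeping both in one page object:

```python
# Minimal sketch: locators and actions kept in separate classes.
from selenium.webdriver.common.by import By


class LoginPageLocators:
    """Element map only -- no behaviour."""
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")


class LoginActions:
    """Actions only -- delegates element knowledge to the locator class."""

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.find_element(*LoginPageLocators.USERNAME).send_keys(username)
        self.driver.find_element(*LoginPageLocators.PASSWORD).send_keys(password)
        self.driver.find_element(*LoginPageLocators.SUBMIT).click()
```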
As to
should I create a separate test case for each object? I mean create separate .py files and import the libraries all over again, or just import the objects into the one .py file which represents the test?
Your test harness should keep three concerns separate:
test execution engine (core logic)
the test scripts
test data
So, simply put: don't mix your test scripts with your POMs.
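As a minimal sketch of that separation, assuming a pytest + Selenium setup and the illustrative LoginActions object above (the module paths pages.login_page and data.users are hypothetical), the test script only imports page objects and test data; it defines neither:

```python
# tests/test_login.py -- the test script imports page objects and data, but defines neither.
# Illustrative file layout:
#   pages/login_page.py   -> page objects (element maps + actions)
#   data/users.py         -> test data
#   tests/test_login.py   -> test scripts (this file)

import pytest
from selenium import webdriver

from pages.login_page import LoginActions
from data.users import VALID_USER


@pytest.fixture
def driver():
    drv = webdriver.Chrome()
    yield drv
    drv.quit()


def test_valid_user_can_log_in(driver):
    driver.get("https://example.test/login")
    LoginActions(driver).log_in(VALID_USER["name"], VALID_USER["password"])
    assert "dashboard" in driver.current_url
```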
Here is a GitHub repo dedicated to test automation and design patterns. Feel free to use it!
I have a web application which currently exists as a single large class containing a front controller, and a bootstrap file which runs the class and passes in the settings.
Over time, the class has become too large, with multiple concerns that I have long wanted to refactor into what will be fairly obvious subclasses. I'm very familiar with all kinds of refactorings.
However, for this application, at present there is no test coverage. Because all interactions are done by reading GET and POST parameters, there is only one public entry point into the system: URL interactions through the front controller. So the class is hard to unit test (I will be using PHPUnit) as it stands.
Obviously it is far safer to refactor with tests already in place. So I would welcome views on which strategy is best:
1) Create a pile of tests that exercise GET and POST interactions as a user of the web application would; or
2) Create a pile of tests against the private functions by using the ReflectionClass workaround, and convert these to standard PHPUnit tests concurrently with the refactoring; or
3) Add unit testing after doing the subclass refactoring, testing the public entry API points to the new subclasses.
Whichever way you approach it, I would do it one piece of functionality at a time if you can; it makes the task less daunting and therefore more achievable...
When confronted with such situations in the past (quite a few times in fact), I usually do this:
First of all, does the application use external dependencies like databases? If so, the first thing to do is make these predictable. I usually try to find where these external dependencies are created and then delegate their creation to a good Dependency Injection framework (like Pimple).
Once that is done, you can add rules to the DI factory methods that return a test database under certain conditions. I usually use a test domain for this: if the site is foo.local, then I also have a domain test.foo.local that sets (in the Apache config, for example) an environment variable APPLICATION_ENV to 'tests'. The DI container can then inspect this environment variable to serve the right dependency, in our case a connection pointing to a test database.
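The same idea, sketched conceptually in Python rather than Pimple/PHP (APPLICATION_ENV is taken from the answer; the factory shape and database names are illustrative): the container or factory inspects the environment and hands back a predictable test dependency.

```python
# Conceptual sketch only -- not the Pimple setup described above.
import os
import sqlite3


def make_connection():
    """Factory that serves a test database when APPLICATION_ENV says 'tests'."""
    if os.environ.get("APPLICATION_ENV") == "tests":
        return sqlite3.connect("test.db")        # predictable, disposable test database
    return sqlite3.connect("production.db")      # whatever the real app normally uses


# The rest of the code only ever asks the factory/container for "a connection",
# so swapping databases never touches application logic.
connection = make_connection()
```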
(I don't like that production code is aware that it might be running in a test scenario, but it's a lesser evil, and it can be alleviated by, for example, using config files that have the environment name in the filename. I digress.)
When this is done, my tests have a reference to the same test database, which I use to set up predictable test data with something like DbUnit or test-db-acle (disclaimer: I wrote the latter, so I tend towards it).
Once the test data is in place, I issue a request to the URL in question with something like Guzzle (in the important variations) and record the output(s), which I then use to write a test that sets the current behaviour 'in stone'.
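A rough sketch of that record-and-pin ("characterization") idea, in Python with requests rather than Guzzle/PHPUnit; the URL, parameters, and golden-file path are illustrative:

```python
# Characterization ("golden master") smoke test: record the current output once,
# then assert that future runs still produce it.
import pathlib
import requests

GOLDEN = pathlib.Path("golden/search_results.html")


def test_search_output_is_unchanged():
    # The same GET the front controller would receive from a real user.
    response = requests.get(
        "http://test.foo.local/index.php",
        params={"action": "search", "q": "widget"},
        timeout=10,
    )
    assert response.status_code == 200

    if not GOLDEN.exists():
        # First run: record the current behaviour and set it "in stone".
        GOLDEN.parent.mkdir(parents=True, exist_ok=True)
        GOLDEN.write_text(response.text)
    assert response.text == GOLDEN.read_text()
```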
NOTE: this is a very imprecise method; writing tests in a test-driven way is best, but that is obviously not an option here.
However, if you are reasonably sure that some manual testing has been done on this, then it is a good way to at least get a few smoke tests into place. In my experience, even one rudimentary smoke test is infinitely more valuable than none.
Now that you have a smoke test in place, you can refactor this part of the application and move on to the next.
I hope this helps.
I have an app which is growing pretty big and doing too much as a single app, so I'd like to split it into 2 or 3 "sub-apps".
The problem is that there are a dozen models which are linked to each other (foreign keys, ManyToManyFields, etc.).
I've read LOTS of times that apps should be self-contained, so are there any best practices for splitting a big app into several apps linked to each other?
In particular: how bad is importing models from other apps?
I haven't heard of a best-practice solution, but here's what I usually do, and I split apps a lot:
Step 0 - When is an app "too big"?
An app should be an (independent) logical unit. "Independent" is actually misleading; of course you can have dependencies like django.contrib.auth. What you should not have, though, are cross dependencies: they will eventually lead to circular imports. That being said, your app can grow quite large, which is totally fine.
If you're having problems organizing your code, remember that every module can be built as a package. You simply split your models.py into models/__init__.py and models/LOGICAL_UNITS.py.
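For example, a models package might re-export everything from its submodules so existing imports keep working (module and model names here are illustrative):

```python
# myapp/models/__init__.py -- re-export so "from myapp.models import Invoice" still works.
from .billing import Invoice, Payment   # myapp/models/billing.py
from .customers import Customer         # myapp/models/customers.py

__all__ = ["Invoice", "Payment", "Customer"]
```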
The only reason why you should split an app is because you can, not because you want to ;)
Step 1 - Overview
Use django_extensions' graph printing capabilities.
This should give you a good overview and might help you find so-called "communities": groups of models that have strong cross dependencies.
Those communities usually make a pretty good app.
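For reference, a minimal setup might look like the following; the exact graph_models flags depend on your django-extensions version, and the command needs pygraphviz or pydot installed:

```python
# settings.py -- enable django-extensions (assumes it is installed, e.g. via pip)
INSTALLED_APPS = [
    # ... your apps ...
    "django_extensions",
]

# Then, from the shell (flags vary by version):
#   python manage.py graph_models -a -g -o my_project_models.png
```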
Step 2 - Naming:
If you can't find a name for your new application, it probably isn't one.
I want to write the first unit test of my life.
At present, I am developing a new ASP.NET MVC 5 project. It is a simple workflow system. My project contains 4 layers:
Presentation layer (the MVC project)
Infrastructure layer (which contains repositories and the ORM)
Domain layer (which contains POCO classes and business-logic interfaces)
Service layer (which implements the domain interfaces)
I believe I need to test the Service layer first. Is that right? Which layer should I test first?
There's no single correct approach, but the most common techniques are:
Top-Down, also called Outside-In. Here, you start at the outside layer and work your way in.
Bottom-Up. Here, you start with the constituent building blocks and assemble them into a working system.
As Code Complete describes, using the two approaches interchangeably can actually be beneficial, because what you learn from working at one end helps you better understand what you need to do at the other end, and vice versa. I often do a bit of Outside-In, then some Bottom-Up, then some more Outside-In, etc.
As per Mark Seemann's answer, you can test the UI layer first and finish with the data layer, or the reverse.
Who is responsible for the project? Which part of it is business critical? Rather than testing each layer "horizontally", test through all of the layers "vertically" for a particular piece of functionality.
This gives you coverage based on business priorities, and you can apply any testability changes or techniques you need across all of the layers as you start to test each piece of functionality.
Since you have already written your code, be prepared to refactor some of it to make it more testable (for example, setting up Dependency Injection to isolate code for unit testing), and make note of these changes to help you design for testability in the future.
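The pattern is language-agnostic; here is a minimal sketch in Python rather than C# (all class and method names are invented for illustration) of testing a service in isolation by injecting a fake repository:

```python
# Illustrative only: a service tested against a hand-rolled fake repository,
# so no ORM or database is involved in the unit test.

class FakeWorkflowRepository:
    """Stands in for the real ORM-backed repository."""

    def __init__(self):
        self.saved = []

    def add(self, workflow):
        self.saved.append(workflow)


class WorkflowService:
    """The service under test; it only knows the repository it is given."""

    def __init__(self, repository):
        self._repository = repository

    def start_workflow(self, name):
        if not name:
            raise ValueError("workflow needs a name")
        self._repository.add({"name": name, "state": "started"})


def test_start_workflow_persists_a_started_workflow():
    repo = FakeWorkflowRepository()
    WorkflowService(repo).start_workflow("purchase-approval")
    assert repo.saved == [{"name": "purchase-approval", "state": "started"}]
```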
I am new to Joomla, started learning it just a day ago and didn't manage to find an answer to my question in the docs (which suck real bad compared to Drupal).
So what I want to do is override the whole module in a template. The documentation only suggests I can override the markup of a module by placing corresponding files in the html folder, but I have to make some corrections to the actual logic. Is copying the module, changing it, and then installing it as a separate entity the only way to go? I mean, it makes sense that the "template" folder is for "views", but with the kind of application I have to develop that is going to be annoying...
Yeah, you can only override views.
If you want to override logic, you have 2 options:
Change the actual logic in place, which leads to problems when updating, etc.
Duplicate the module and change the logic, as you suggested
One other option to consider is to replicate or fix the logic in the template. While this is not a very slick way of doing it, it is faster, especially compared to duplicating a whole component.
Note that you can also add your own libraries to the Joomla libraries folder to centralize your own code.
Further, if you manage your code with (for example) SVN, you should not have any problems on upgrade if you create new views that include their own logic.
We are developing applications for use within AutoCAD.
Basically we create a Class Library Project, and load the .dll in AutoCAD with a command (NETLOAD).
That way, we can use commands, palettes, user controls, forms, etc.
Autodesk provides an API through DLLs that ship in the AutoCAD program directory.
When referencing these DLLs, you can only call them at runtime while your app is loaded in AutoCAD (this is a licensing protection from Autodesk).
While developing, this is not a problem for us: we need to test visually within the context of AutoCAD anyway, so we just set the Debug properties to start acad.exe and load our DLL via a script passed in the acad.exe parameters.
The problem is that, when trying to unit test our code, NUnit or MSTest do not run within the AutoCAD context, and they cannot start it either.
There is a tool called Gallio which provides an interface to AutoCAD, so that it can run unit tests through IPC over named pipes.
However, this solution is, for me, too much of a hassle. I want to be able to quickly write tests without having to leave my beloved IDE.
So, from a "good design" point of view, what would be a good approach to this problem? I'm thinking I basically need a testable codebase which does not reference the AutoCAD DLLs, and a non-testable part that does reference them.
I'm sure there are ways to get this to work (IoC, DI, the Adapter pattern, ...); I just don't know these principles in depth, and thus I don't know which route will best suit my purposes and goals.
The first step is to triage your code for parts which need AutoCAD and parts which are really independent. Create unit tests for the independent parts as you usually would.
For the other parts, you need mockups which behave like AutoCAD. Make them as simple as possible (for example, just return the correct answers in the methods without doing any calculations). Now, you need several sets of classes:
A set of interfaces which your code uses to achieve something (for example, load a drawing).
A set of implementations of said interfaces which call the AutoCAD DLLs.
A set of classes which exercise the implementations within the context of AutoCAD. Just create a small UI with a couple of buttons where you can run this code. It is used to reassure yourself that your mockups do the right thing. Log method parameters and results to a file so you can see how AutoCAD responds. If a mockup breaks, you can use this code to verify what AutoCAD is doing, and you can use it as a reference when developing the mockups.
When you know how AutoCAD responds, create the mockups. In your tests, create them with the desired results (and errors, so you can test error handling, too). So when you have boolean loadDrawing(File filename), create a mockup which returns true for the filename exists.dxf and false for anything else.
Use a factory or DI to tell your application code which implementation to use. I tend to have a big global config class with a lot of public fields where I simply store the objects to use. I can set this up in the beginning, it's fast, it's easy to understand. If you need to create objects at runtime, then put factories in the config class which generate the objects for you, so you can swap them out.
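A rough sketch of that structure in Python (the real code would be C#/.NET against the AutoCAD API; every name here, including load_drawing, is illustrative):

```python
from abc import ABC, abstractmethod


class DrawingService(ABC):
    """The interface your own code programs against."""

    @abstractmethod
    def load_drawing(self, filename: str) -> bool: ...


class AcadDrawingService(DrawingService):
    """Real implementation: would call the AutoCAD DLLs, so it only runs inside AutoCAD."""

    def load_drawing(self, filename: str) -> bool:
        raise NotImplementedError("calls the AutoCAD API; exercised via the in-AutoCAD test UI")


class FakeDrawingService(DrawingService):
    """Mockup with canned answers, so unit tests run outside AutoCAD."""

    def load_drawing(self, filename: str) -> bool:
        return filename == "exists.dxf"


class Config:
    """The 'big global config' object: tests assign the fake, production code the real one."""
    drawing_service: DrawingService = FakeDrawingService()


def test_missing_drawing_is_reported():
    assert Config.drawing_service.load_drawing("missing.dxf") is False
```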
I wrote ... and later broke ... a test runner for AutoCAD. It is at https://github.com/CADbloke/CADtest. If you're interested in it, nudge me along and I'll fix it faster. I am waiting for the NUnit v3 release before I tackle it.
If you reset to the 3rd commit in that repo (I think) and fiddle with it from there, it should run.