I'm currently working on a large-scale business application and I'm going to use Solver Foundation with Solver Foundation Services (SFS) to solve a huge portfolio optimization problem with user-defined constraints and input. The problem will usually have around 5,000-10,000 variables and a couple of thousand constraints.
I've started the development using SFS, but I'm having serious trouble unit testing my code. I want to test that the problem I've set up is correct, that all constraints have the correct input, and that all parameters are set up correctly. But to do that I need to write unit tests against the SolverContext, and more specifically, the Parameter and Constraint objects. And these classes are completely sealed up. I can't seem to get any information out of them except their name, expression, and index sets.
Is there any way to test the value of a parameter for a given index?
I.e.
var value = myParameter.GetValueFor(anObjectsID);
Assert.That(value, Is.EqualTo(expectedValue));
I can't seem to find any documentation or articles concerning Solver Foundation and unit testing.
Any ideas or comments?
Julian
Assume that we have a utility for checking date range validity (for example, that the start date is not greater than the end date, and that the range does not exceed a maximum length), and that the utility is widely used across all query APIs (about 40 APIs in total). The utility returns 400 Bad Request if the date range is invalid. Now the question is:
We write unit tests for the utility so all unit tests for query APIs can assume that date range check works as expected.
versus
Because query APIs are where use cases start, all unit tests for query APIs must include date range test cases to ensure that 400 Bad Request status will occur when date range is invalid.
Which one is proper?
The philosophy behind unit tests is to test the smallest testable piece of the software. From that point of view, the answer is that you should test the utility class in order to create a safety net around it and verify that the logic within the utility is valid.
On the other hand, it also depends on your QA strategy. I can imagine you have QA gates specifying, for example, that the business logic layer must be covered up to X% but the repository layer should not be covered at all. That is a valid real-life example where not everything is ideal :).
I would say that both approaches are valid. Let's say the public API is the bare minimum you should cover, and ideally you should cover every testable piece of code.
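To make the trade-off concrete, here is a minimal sketch in Python (the validator, the 31-day limit, and the `query_api` function are all made up for illustration): the utility gets the exhaustive tests in one place, while each API test only checks the wiring, i.e. that an invalid range is mapped to a 400.

```python
from datetime import date

MAX_RANGE_DAYS = 31  # assumed limit, for illustration only

def validate_date_range(start, end, max_days=MAX_RANGE_DAYS):
    """True if start <= end and the span does not exceed max_days."""
    return start <= end and (end - start).days <= max_days

# Exhaustive unit tests live here, in one place ...
assert validate_date_range(date(2023, 1, 1), date(2023, 1, 10))
assert not validate_date_range(date(2023, 1, 10), date(2023, 1, 1))  # start > end
assert not validate_date_range(date(2023, 1, 1), date(2023, 3, 1))   # span too long

# ... so an API test only needs to check the wiring, e.g. that an
# invalid range maps to a 400 response (sketched as a plain function).
def query_api(start, end):
    return 400 if not validate_date_range(start, end) else 200

assert query_api(date(2023, 1, 10), date(2023, 1, 1)) == 400
assert query_api(date(2023, 1, 1), date(2023, 1, 10)) == 200
```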
Dynamics AX 2012 comes with unit testing support.
To have meaningful tests some test data needs to be provided (stored in tables in the database).
To get a reproducible outcome of the unit tests we need to have the same data stored in the tables every time the tests are run. Now the question is: how can we accomplish this?
I learned that there is the possibility of setting the isolation level for the TestSuite to SysTestSuiteCompanyIsolateClass. This will create an empty company and delete the company after the tests have been run. In the setup() method I can fill my test data into the tables with insert statements. This works fine for small scenarios but becomes cumbersome very quickly in a real-life project.
I was wondering if there is anyone out there with a practical solution of how to use the X++ Unit Test Framework in a real world scenario. Any input is very much appreciated.
I agree that creating test data in a new and empty company only works for fairly trivial scenarios, or for scenarios where you implemented the whole data structure yourself. But as soon as existing data structures are needed, this approach can become very time-consuming.
One approach that worked well for me in the past is to run unit tests in an existing company that already has most of the configuration data (e.g. financial setup, inventory setup, ...) needed to run the test. The test itself runs in a ttsBegin - ttsAbort block so that the unit test does not actually persist any data.
Another approach is to implement data provider methods that are test agnostic, but create data that is often used in unit tests (e.g. a method that creates a product). It takes some time to create a useful set of data provider methods, but once they exist, writing unit tests becomes a lot faster. See SysTest part V.: Test execution (results, runners and listeners) on how Microsoft uses a similar approach (or at least they used to back in 2007 for AX 4.0).
Both approaches can also be combined, you would call the data provider methods inside the ttsBegin - ttsAbort block to create the needed data only for the unit test.
Another useful method is to use doInsert or doUpdate to create your test data, especially if you are only interested in a few fields and do not need to create a completely valid record.
I think that the unit test framework was an afterthought. In order to really use it, Microsoft would have needed to provide unit test classes, then when you customize their code, you also customize their unit tests.
So without that, you're essentially left coding unit tests that try and encompass base code along with your modifications, which is a huge task.
Where I think you can actually use it is around isolated customizations that perform some function, and aren't heavily built on base code. And also with customizations that are integrations with external systems.
Well, from my point of view, you will not be able to leverage more than what you pointed from the standard framework.
What you can do is more around release management. You can set up an integration environment with the targeted data, push your nightly-build model into this environment at the end of the build process, and then run your tests.
Yes, it will take more effort to set up and maintain, but it's the only solution I've seen until now for running unit or integration tests on a large and consistent set of data.
To have meaningful tests some test data needs to be provided (stored in tables in the database).
As someone else already indicated - I found it best to leverage an existing company for data. In my case, several existing companies.
To get a reproducible outcome of the unit tests we need to have the same data stored in the tables every time the tests are run. Now the question is, how can we accomplish this?
We have built test helpers that help us "run the test", automating what a person would do, given you have architected your application to be testable. In essence, our test class uses the helpers to run the test, then provides most of the value by validating the data it created.
I learned that there is the possibility of setting the isolation level for the TestSuite to SysTestSuiteCompanyIsolateClass. This will create an empty company and delete the company after the tests have been run. In the setup() method I can fill my test data into the tables with insert statements. This works fine for small scenarios but becomes cumbersome very quickly in a real-life project.
I did not find this practical in our situation, so we haven't leveraged it.
I was wondering if there is anyone out there with a practical solution of how to use the X++ Unit Test Framework in a real-world scenario. Any input is very much appreciated.
We've been using the testing framework as stated above and it has been working for us. The key is to find the correct scenarios to test; it also provides a good foundation for writing testable classes.
I want to write an algorithm (a bunch of machine learning algorithms) in C/C++ or maybe in Java, possibly in Python. The language doesn't really matter to me - I'm familiar with all of the above.
What matters to me is the testing. I want to train my models using training data, so I have test input, I know what the output should be, and I compare it to the model's output. What kind of test is this? Is it a unit test? How do I approach the problem? I can see that I could write some code to check what I need checked, but I want to separate the testing from the main code. Testing is a well-developed field and I've seen this done before, but I don't know what this particular kind of testing is called, so I can't read up on it and avoid creating a mess. I'd be grateful if you could let me know what this testing method is called.
Your best bet is to watch the psychology of testing videos from the testing god, Misko Hevery: http://misko.hevery.com/
Links to Misko's videos: http://misko.hevery.com/presentations/
And read this Google testing guide: http://misko.hevery.com/code-reviewers-guide/
Edited:
Anyone can write tests; they are really simple and there is no magic to writing a test. You can simply do something like:
var sut = new MyObject();
var res = sut.IsValid();
if (res != true)
{
    throw new ApplicationException("message");
}
That is the theory, of course. These days we have tools to simplify the tests, and we can write something like this:
new MyObject().IsValid().Should().BeTrue();
But what you should really do is focus on writing testable code; that's the magic key.
Just follow the psychology of testing videos from Misko to get you started.
This sounds a lot like Test-Driven Development (TDD), where you create unit-tests ahead of the production code. There are many detailed answers around this site on both topics. I've linked to a couple of pertinent questions to get you started.
If your inputs/outputs are at the external interfaces of your full program, that's black box system testing. If you are going inside your program to zoom in on a particular function, e.g., a search function, providing inputs directly into the function and observing the behavior, that's unit testing. This could be done at function level and/or module level.
If you're writing a machine learning project, the testing and training process isn't really Test-Driven Development. Have you ever heard of co-evolution? You have a set of puzzles for your learning system that are, themselves, evolving. Their fitness is determined by how much they confound your learners.
For example, I want to evolve a sorting network. My learning system is the programs that produce networks. My co-evolution system generates inputs that are difficult to sort. The sorting networks are rewarded for producing correct sorts and the co-evolutionary systems are rewarded for how many failures they trigger in the sorting networks.
I've done this with genetic programming projects and it worked quite well.
Probably backtesting, which means you have some historical inputs and run your algorithm over them to evaluate its performance. The term you used yourself - training data - is more general, and you could search for that to find some useful links.
It's unit testing: the controllers are tested and the code is checked in and out without really messing up your development code. This process also fits Test-Driven Development (TDD), where every development cycle is tested before going into the next software iteration or phase.
Although this is a very old post, my 2 cents :)
Once you've decided which algorithmic method to use (your "evaluation protocol", so to say) and tested your algorithm on unitary edge cases, you might be interested in ways to run your algorithm on several datasets and assert that the results are above a certain threshold (individually, or on average, etc.)
This tutorial explains how to do it within the pytest framework, which is the most popular testing framework in Python. It is based on an example (comparing polynomial fitting algorithms on several datasets).
(I'm the author, feel free to provide feedback on the github page!)
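Independent of that tutorial, the core idea can be sketched in a few lines (the model, datasets, and 0.75 threshold are invented for illustration; pytest would collect `test_accuracy_above_threshold` automatically by its name):

```python
def model(x):
    """Toy stand-in for a trained classifier."""
    return x >= 0

# Hypothetical evaluation datasets: (input, expected label) pairs.
datasets = {
    "set_a": [(-2, False), (-1, False), (1, True), (3, True)],
    "set_b": [(0, True), (5, True), (-4, False), (2, True)],
}

THRESHOLD = 0.75  # assumed minimum acceptable accuracy

def accuracy(predict, dataset):
    """Fraction of examples the model predicts correctly."""
    return sum(predict(x) == y for x, y in dataset) / len(dataset)

def test_accuracy_above_threshold():
    # pytest collects functions named test_*; plain asserts are enough.
    for name, data in datasets.items():
        assert accuracy(model, data) >= THRESHOLD, name

test_accuracy_above_threshold()
```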
I'm a controls developer and a relative newbie to unit testing. Almost daily, I fight the attitude that you cannot test controls because of the UI interaction. I'm producing a demonstration control to show that it's possible to dramatically reduce manual testing if the control is designed to be testable. Currently I've got 50% logic coverage, but I think I could bump that up to 75% or higher if I could find a way to test some of the more complicated parts.
For example, I have a class with properties that describe the control's state and a method that generates a WPF PathGeometry object made of several segments. The implementation looks something like this:
internal PathGeometry CreateOuterGeometry()
{
    double arcRadius = OuterCoordinates.Radius;
    double sweepAngle = OuterCoordinates.SweepAngle;
    ArcSegment outerArc = new ArcSegment(...);
    LineSegment arcEndToCenter = new LineSegment(...);
    PathFigure fig = new PathFigure();
    // configure figure and add segments...
    PathGeometry outerGeometry = new PathGeometry();
    outerGeometry.Figures.Add(fig);
    return outerGeometry;
}
I've got a few other methods like this that account for a few hundred blocks of uncovered code, an extra 25% coverage. I originally planned to test these methods, but rejected the notion. I'm still a unit testing newbie, and the only way I could think of to test the code would be several methods like this:
void CreateOuterGeometry_AngleIsSmall_ArcSegmentIsCorrect()
{
    ClassUnderTest classUnderTest = new ClassUnderTest();
    // configure the class under test...
    ArcSegment expectedArc = // generate expected Arc...
    PathGeometry geometry = classUnderTest.CreateOuterGeometry();
    ArcSegment arc = (ArcSegment)geometry.Figures[0].Segments[0];
    Assert.AreEqual(expectedArc, arc);
}
The test itself looks fine; I'd write one for each expected segment. But I had some problems:
Do I need tests to verify "Is the first segment an ArcSegment?" In theory the test tests this, but shouldn't each test only test one thing? This sounds like two things.
The control has at least six cases for calculation and four edge cases; this means for each method I need at least ten tests.
During development I changed how the various geometries were generated several times. This would cause me to have to rewrite all of the tests.
The first problem gave me pause because it seemed like it might inflate the number of tests. I thought I might have to test things like "Were there x segments?" and "Is segment n the right type?", but now that I've thought more I see that there's no branching logic in the method so I only need to do those tests once. The second problem made me more confident that there would be much effort associated with the test. It seems unavoidable. The third problem compounds the first two. Every time I changed the way the geometry was calculated, I'd have to edit an estimated 40 tests to make them respect the new logic. This would also include adding or removing tests if segments were added or removed.
Because of these three problems, I opted to write an application and manual test plan that puts the control in all of the interesting states and asks the user to verify it looks a particular way. Was this wrong? Am I overestimating the effort involved with writing the unit tests? Is there an alternative way to test this that might be easier? (I'm currently studying mocks and stubs; it seems like it'd require some refactoring of the design and end up being approximately as much effort.)
Use dependency injection and mocks.
Create interfaces for ArcSegmentFactory, LineSegmentFactory, etc., and pass a mock factory to your class. This way, you'll isolate the logic that is specific to this object (this should make testing easier), and won't be depending on the logic of your other objects.
About what to test: you should test what's important. You probably have a timeline in which you want to have things done, and you probably won't be able to test every single thing. Prioritize stuff you need to test, and test in order of priority (considering how much time it will take to test). Also, when you've already made some tests, it gets much easier to create new tests for other stuff, and I don't really see a problem in creating multiple tests for the same class...
About the changes: that's what tests are for, allowing you to make changes without fearing that your change will bring chaos to the world.
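A rough sketch of the idea in Python (the class and factory names are invented; in C# you would use interfaces and a mocking framework instead): the class under test receives its factory through the constructor, and the test injects a recording fake so it can assert on what was requested rather than on WPF objects.

```python
class ArcSegmentFactory:
    """Hypothetical production factory."""
    def create(self, radius, sweep_angle):
        return ("arc", radius, sweep_angle)  # stand-in for a real ArcSegment

class RecordingArcFactory:
    """Test double: records arguments instead of building real geometry."""
    def __init__(self):
        self.calls = []
    def create(self, radius, sweep_angle):
        self.calls.append((radius, sweep_angle))
        return ("fake-arc", radius, sweep_angle)

class GeometryBuilder:
    """Class under test receives its factory via constructor injection."""
    def __init__(self, arc_factory):
        self.arc_factory = arc_factory
    def create_outer_geometry(self, radius, sweep_angle):
        return [self.arc_factory.create(radius, sweep_angle)]

# The test asserts on what the builder asked the factory for.
factory = RecordingArcFactory()
builder = GeometryBuilder(factory)
builder.create_outer_geometry(10.0, 90.0)
assert factory.calls == [(10.0, 90.0)]
```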
You might try writing a control generation tool that generates random control graphs, and test those. This might yield some data points that you might not have thought of.
In our project, we use JUnit to perform tests which are not, strictly speaking, unit tests. We find, for example, that it's helpful to hook up a blank database and compare an automatic schema generated by Hibernate (an Object-Relational Mapping tool) to the actual schema for our test database; this helps us catch a lot of issues with wrong database mappings. But in general... you should only be testing one method, on one class, in a given test method. That doesn't mean you can't do multiple assertions against it to examine various properties of the object.
My approach is to convert the graph into a string (one segment per line) and compare this string to an expected result.
If you change something in your code, tests will start to fail but all you need to do is to check that the failures are in the right places. Your IDE should offer a side-by-side diff for this.
When you're confident that the new output is correct, just copy it over the old expected result. This will make sure that a mistake won't go unnoticed (at least not for long), the tests will still be simple and they are quick to fix.
Next, if you have common path parts, then you can put them into individual strings and build the expected result of a test from those parts. This allows you to avoid repeating yourself (and if the common part changes, you just have to update a single place for all tests).
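A minimal Python sketch of the approach (the segment tuples and the serialization format are invented for illustration):

```python
def geometry_to_string(segments):
    """Serialize each segment as one line so failures produce readable diffs."""
    return "\n".join(f"{kind} {' '.join(str(a) for a in args)}"
                     for kind, *args in segments)

# The stored "known good" result; when the code legitimately changes,
# you review the new output and copy it over this expected value.
EXPECTED = "arc 10.0 90.0\nline 0.0 0.0"

actual = geometry_to_string([("arc", 10.0, 90.0), ("line", 0.0, 0.0)])
assert actual == EXPECTED
```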
If I understand your example correctly, you were trying to find a way to test whether a whole bunch of draw operations produce a given result.
Instead of human eyes, you could have produced a set of expected images (a snapshot of verified "good" images), and created unit tests which use the draw operations to create the same set of images and compare the result with an image comparison. This would allow you to automate the testing of the graphic operations, which is what I understand your problem to be.
The textbook way to do this would be to move all the business logic to libraries or controllers which are called by a 1 line method in the GUI. That way you can unit test the controller or library without dealing with the GUI.
I have a simple project, mostly consisting of back-end service code. I have this fully unit-tested, including my DAL layer...
Now I have to write the front-end. I re-use what business objects I can in my front-end, and at one point I have a grid that renders some output. I have my DAL object with some function called DisplayRecords(id) which displays the records for a given ID...
All of these DAL objects are unit tested. But is it worth it to write a unit test for the DisplayRecords() function? This function calls a stored proc that does some joins, which means my unit test would have to set up multiple tables, one with 15 columns; and its return value is a DataSet (the only function in my DAL that returns a DataSet, because it wasn't worth creating an object just for this one grid)...
Is stuff like this even worth testing? What about front-end logic in general - do people tend to skip unit tests for the ASP.NET front-end, similar to how people 'skip' the logic of private functions? I know the latter is a bit different - testing behavior vs. implementation and all... but I'm just curious what the general rule of thumb is.
Thanks very much
There are a few things that weigh into whether you should write tests:
It's all about confidence. You build tests so that you have confidence to make changes. Can you confidently make changes without tests?
How important is this code to the consumers of the application? If this is critical and central to everything, test it.
How embarrassing is it if you have regressions? On my last project, my goal was no regressions-- I didn't want the client to have to report the same bug twice. So every important bug got a test to reproduce it before it was fixed.
How hard is it to write the test? There are many tools that can help ease the pain:
Selenium is well understood and straightforward to set up. Can be a little expensive to maintain a large test suite in selenium. You'll need the fixture data for this to work.
Use a mock to stub out your DAL call, assuming its tested elsewhere. That way you can save time creating all the fixture data. This is a common pattern in testing Java/Spring controllers.
Break the code down in other ways simply so that it can be tested. For example, extract out the code that formats a specific grid cell, and write unit tests around that, independent of the view code or real data.
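The second bullet can be sketched with Python's `unittest.mock` (the controller and DAL method names are invented; the same pattern applies to an ASP.NET page backed by a mocked DAL):

```python
from unittest.mock import Mock

class RecordsController:
    """Hypothetical controller whose DAL dependency is injected."""
    def __init__(self, dal):
        self.dal = dal
    def display_records(self, record_id):
        rows = self.dal.get_records(record_id)
        return [r["name"] for r in rows]  # trivial "rendering" logic

# Stub out the DAL call (assumed tested elsewhere) -- no fixture tables,
# no stored proc, no 15-column setup needed.
dal = Mock()
dal.get_records.return_value = [{"name": "Alice"}, {"name": "Bob"}]

controller = RecordsController(dal)
assert controller.display_records(42) == ["Alice", "Bob"]
dal.get_records.assert_called_once_with(42)
```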
I tend to make quick Selenium tests and just sit and watch the app do its thing - that's a fast validation method which avoids all the manual clicking.
Fully automated UI testing is tedious and should, IMO, only be done in more mature apps where the UI won't change much. Regarding the 'in-between' code, I would test it if it is reused and/or complicated or introduces new logic, but if it's just more or less a new sequence of DAL method calls specific to a single view, I would skip it.