I've been conducting black box testing for a software program that does engineering analysis on different types of concentrated solar power (CSP) models.
The quality assurance testing I've been conducting has involved changing only one default parameter of the model at a time. It has been very simple: I just verify whether or not the output matches the expected result.
However, I now want to start exploring combinations of configurations that differ from the default setup, to investigate whether any of these combinations produce an incorrect value or even cause the user interface (UI) to fail.
In the first case I am working on, the UI has six different parameters I am expected to explore with respect to the default configuration. On top of this, I want to explore combinations of these six configuration options by either increasing (+) or decreasing (-) each one with respect to its default value.
As you can imagine, exploring these six parameters in combinations of (+) and (-) quickly creates many, MANY different black box tests: the (+)/(-) choices alone already give 2^6 = 64 combinations, before even considering the default levels.
Are there any suggestions on how to conduct these black box tests more efficiently? I am trying to avoid running every possible configuration while still exercising all aspects of the computation code.
SUMMARY
The software I am working on is an engineering analysis tool for renewable energy systems; it computes power outputs and similar quantities.
My current job task has been to test different configurations of these systems (e.g. selecting an evaporative cooling condenser instead of the default radiative cooling condenser via a drop-down menu in the user interface) to ensure that the release version of this software functions correctly across the board.
So far the process has been very simple: change one factor of the base model and check, via the generated outputs, that the code functions as it should.
How I need to pivot
But now we want to start testing combinations of changes. This shift in workflow will be very tedious and time-consuming: we are considering a factorial design approach to map out a plan of attack for which configurations to test, to ensure that the code functions properly when multiple things are changed at once. That could create a very large number of configurations I would need to test manually.
All in all, does anyone have suggestions for an alternative approach? I've been reading up on other software testing methods, but am not entirely sure whether things like regression testing, smoke testing, or sanity checks are a better fit for this scenario.
Thanks!
EDIT
The software I am testing is developed in Visual Studio, where I am using the Google Test framework to test my system configurations manually.
Currently, each test I create for a given concentrated solar power system configuration requires me to manually diff the code (via WinMerge) between the default configuration (no changes made) and the alternative configuration. I then use those code differences in the Google Test framework to reproduce what the alternative configuration should output, testing it against the accepted output values.
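For context, here is a simplified sketch of what one of these single-change tests looks like; RunCspModel, the CspConfig struct, and the numbers are hypothetical stand-ins for the real simulation entry point and the accepted output values:

    // Simplified sketch of one single-change configuration test.
    // All names and numbers are hypothetical placeholders.
    #include <gtest/gtest.h>
    #include <string>

    struct CspConfig {
        std::string condenser = "radiative";  // default configuration
        double solarMultiple  = 2.0;
    };

    // Provided by the real code base; declared here only for the sketch.
    double RunCspModel(const CspConfig& config);  // returns annual energy in MWh

    TEST(CspConfigTest, EvaporativeCondenserMatchesAcceptedOutput) {
        CspConfig config;                     // start from the default
        config.condenser = "evaporative";     // the single change under test
        const double expectedMwh = 563000.0;  // accepted value (hypothetical)
        EXPECT_NEAR(RunCspModel(config), expectedMwh, 1.0);  // 1 MWh tolerance
    }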
It is only going to get more complicated, with some manual user interface work needed as well ... or so it seems to me.
How can I automate such a testing suite when I am required to do so much back-end work?
As I understand it, to avoid the manual effort of testing so many combinations, an automated testing tool is needed here. If the software you are testing is browser based, Selenium is a good candidate; if it runs as a desktop application on Windows or Mac, you would need an automation tool that supports Windows/Mac applications. The idea is to create test suites covering the different combinations and to set the expected results once. Once the suite is ready, it can be run after any change to the software to verify that all the combinations still work as expected, without further manual work. There is, however, an up-front effort to create the test suite, and ongoing effort to maintain it when new scenarios occur or the expected results need to be modified.
It would be a pain to test all those combinations manually each time; test automation can surely ease that.
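Since Google Test is already in place here, one way to set this up, as a sketch, is a value-parameterized test that enumerates every (+)/(-) combination and looks up the accepted output for each one. RunCspModel, AcceptedOutputFor, and the tolerance are hypothetical placeholders, not part of the real code base:

    // Sketch: one test body driven over all 2^6 = 64 (+)/(-) combinations.
    #include <gtest/gtest.h>
    #include <bitset>

    constexpr int kNumParams = 6;

    // Bit i == 1 means parameter i is increased (+); bit i == 0 means it is
    // decreased (-) relative to the default configuration.
    using Combination = std::bitset<kNumParams>;

    // Provided elsewhere: run the model for a combination, and look up the
    // accepted output recorded once for that combination.
    double RunCspModel(const Combination& combo);
    double AcceptedOutputFor(const Combination& combo);

    class CspComboTest : public ::testing::TestWithParam<unsigned long> {};

    TEST_P(CspComboTest, OutputMatchesAcceptedValue) {
        const Combination combo(GetParam());
        EXPECT_NEAR(RunCspModel(combo), AcceptedOutputFor(combo), 1e-3);
    }

    // Range() is exclusive of its upper bound, so this covers 0..63.
    INSTANTIATE_TEST_SUITE_P(AllPlusMinusCombos, CspComboTest,
                             ::testing::Range(0ul, 1ul << kNumParams));

The expected results are recorded once (AcceptedOutputFor could read them from a table on disk), and after that the whole suite reruns automatically on every build, which is exactly the trade-off described above: up-front effort to build and maintain the table, no manual work per run.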
I have two utterances:
SampleQuestion what's the {Currency} {Currencytwo} rate
SampleQuestion what is the {Currency} {Currencytwo} rate
The first one works ("what's"), while the second one doesn't ("what is").
What could be a possible reason?
Voice recognition is something that is very hard to test. What is and is not recognized can vary depending on the person speaking, background noise, etc. There are a few things to try to debug your problem.
In the companion app Alexa often types "what it thought it heard". You might check this to see what Alexa thinks it heard when it didn't recognize something.
You can type specific phrases into the simulator on your skill's development page. This lets you test a specific wording directly; however, because it bypasses the voice recognition layer, it is only good for debugging the specifics of your interaction model.
Alexa performs poorly when you have two slots that are not separated by static text. You might consider re-phrasing your utterance so that a connecting word sits between the two slots (for example, "what is the {Currency} to {Currencytwo} rate"), or asking for the two values in separate utterances.
If either of your slots is a custom slot, you might consider what its content is. Alexa doesn't recognize things one word at a time; it looks at the entire sequence of sounds holistically, matches each possibility against what it heard, and picks the one with the highest confidence value. Since currency names are often foreign words, that might be perturbing things. Try cutting down your slot value list and see if that improves recognition.
First the question(s):
How should I write unit tests for a digital filter (band-pass/band-stop) in software? What should I be testing? Is there any sort of canonical test suite for filtering?
How do I select test inputs, generate expected outputs, and define "conformance" in a way that lets me say the actual output conforms to the expected output?
Now the context:
The application I am developing (electromyographic signal acquisition and analysis) needs to use digital filtering, mostly band-pass and band-stop filtering (C#/.Net in Visual Studio).
The previous version of our application has these filters implemented in some legacy code we could reuse, but we are not sure how mathematically correct it is, since we don't have unit tests for it.
Besides that, we are also evaluating Mathnet.Filtering, but its unit test suite doesn't include the subclasses of OnlineFilter yet.
We are not sure how to evaluate one filtering library over the other, and the closest we have got is filtering some sine waves and eyeballing the differences between them. That is not a good approach for unit testing either, which is something we would like to automate (instead of running scripts and evaluating the results elsewhere, even visually).
I imagine a good test suite should test something like the following:
Linearity and Time-Invariance: how should I write an automated test (with a boolean, "pass or fail" assertion) for that?
Impulse response: feeding an impulse to the filter, taking its output (the impulse response), and checking whether it "conforms to expected", in which case:
How would I define expected response?
How would I define conformance?
Amplitude response of sinusoidal input (a sketch of a pass/fail check for this follows the list);
Amplitude response of step / constant-offset input;
Frequency Response (including Half-Power, Cut-off, Slope, etc.)
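For concreteness, the kind of boolean "pass or fail" check I have in mind for the sinusoidal amplitude response might look roughly like the sketch below. Our real code is C#/.NET, but the idea is language-independent; ApplyFilter, the signal length, and the tolerances are hypothetical placeholders, not an existing API:

    // Sketch of a pass/fail amplitude-response check for a band-pass filter.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    const double kPi = 3.14159265358979323846;

    // The filter under test (e.g. the legacy code or the library being
    // evaluated), wrapped behind a single call for the purpose of this sketch.
    std::vector<double> ApplyFilter(const std::vector<double>& input);

    // RMS amplitude, skipping the first `skip` samples so the filter's
    // start-up transient does not bias the measurement.
    double Rms(const std::vector<double>& x, std::size_t skip) {
        double sum = 0.0;
        for (std::size_t i = skip; i < x.size(); ++i) sum += x[i] * x[i];
        return std::sqrt(sum / static_cast<double>(x.size() - skip));
    }

    // Gain (output RMS over input RMS) in dB for a pure sinusoid at frequencyHz.
    double GainDbAt(double frequencyHz, double sampleRateHz) {
        const std::size_t n = 8192, skip = 1024;
        std::vector<double> sine(n);
        for (std::size_t i = 0; i < n; ++i)
            sine[i] = std::sin(2.0 * kPi * frequencyHz * static_cast<double>(i) / sampleRateHz);
        return 20.0 * std::log10(Rms(ApplyFilter(sine), skip) / Rms(sine, skip));
    }

    // Boolean assertions: near 0 dB in the passband, strongly attenuated in the stopband.
    bool PassbandOk(double frequencyHz, double fs, double toleranceDb) {
        return std::fabs(GainDbAt(frequencyHz, fs)) <= toleranceDb;
    }
    bool StopbandOk(double frequencyHz, double fs, double minAttenuationDb) {
        return GainDbAt(frequencyHz, fs) <= -minAttenuationDb;
    }

The same gain measurement, swept over a grid of frequencies, would also give a crude empirical frequency response to compare against the filter's design specification (cut-off, half-power point, slope).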
I cannot be considered an expert in programming or DSP (far from it!), and that's exactly why I am cautious about filters that "seem" to work well. It is common for our clients to question our filtering algorithms (because they need to publish research based on data captured with our systems), and I would like to have formal proof that the filters are working as expected.
DISCLAIMER: this question was also posted on DSP.StackExchange.com
Could someone help me link my multivariate testing with goals? I have successfully created an A/B test and everything is working fine, except that the value is always zero. Does Sitecore create the value automatically, or do we need to set goals on the pages for this to work? As far as I know, we need to set goals on the pages.
I have followed the Sitecore documentation below, which does not say anything about how to set values for the tests.
http://sdn.sitecore.net/upload/sitecore6/65/marketing_operations_cookbook_sc65-usletter.pdf
Even if we set goals for a particular page, how is Sitecore going to recognize whether those goals were accomplished by someone coming to that page directly or by someone arriving via the multivariate test? I am a bit confused.
You're mixing up two concepts here: Sitecore Engagement Value tracking and goal conversion. From your question I gather that what you're trying to accomplish is to determine which variation of, say, a banner or a promotion generates the most clicks?
You can achieve this, but your content editors are going to have to manage how they work with it. In very simple terms, it would be accomplished in this manner:
Set up the M/V test, have each of the variations link to different target pages
On each of the target pages, go to your "Analytics" ribbon, and define "Goals" for the page
Assign a different goal to each target page in this manner
Assign each goal an identical value
With these steps in place, and assuming you have no other tests running, this will produce the result you are looking for.
But the point of all of this is that one needs to fully understand what "Engagement Value" means in the Sitecore CEP and what it can do for you. It is far and away more than simply determining the highest conversion rate on any one component.
There are tools out there more tailored to the exact scenario you are looking for.
See my answer here: Clarification on Sitecore A/B Testing Results
And the SBOS Accellerators kit: http://marketplace.sitecore.net/en/Modules/SBOS_Accelerators.aspx
The ultimate target of A/B testing or multivariate testing is to achieve a conversion, so create a goal with a value, say 10; otherwise the values will always be zero. For any combination of components, if the goal is achieved for that combination, it registers the value: that is the conversion. If no conversion happens for a combination, its value never increases. After the test has run for a long enough duration, the results will show the best possible combination.
Note: the total value never exceeds the maximum value of that goal, i.e. 10.
For example, for a particular combination, if the goal is reached on all 5 of 5 visits, the value is the maximum of 10; with 1 conversion out of 5 visits the value is 2, and with 2 conversions out of 5 visits the value is 4 (the conversion rate multiplied by the goal value).
I have an app which draws a diagram. The diagram follows a certain schema,
e.g. shape X goes within shape Y, shapes {X, Y} belong to a group P ...
The diagram can get large and complicated (think of a circuit diagram).
What is a good approach for writing unit tests for this app?
Find out where the complexity in your code is.
Separate it out from the untestable visual presentation.
Test it.
If you don't have any non-visual complexity, you are not writing a program, you are producing a work of art.
Unless you are using a horribly buggy compiler or something, I'd avoid any tests that boil down to 'test source code does what it says it does'. Any test that's functionally equivalent to:
assertEquals(hash(stripComments(loadSourceCode())), 0x87364f3234);
can be deleted without loss.
It's hard to write well-defined unit tests for something visual like this unless you really understand the exact sequence of API calls that is going to be made.
To test something "visual" like this, you have three parts.
A "spike" to get the proper look, scaling, colors and all that. In some cases, this is almost the entire application.
A "manual" test of that creates some final images to be sure they look correct to someone's eye. There's no easy way to test this except by actually looking at the actual output. This is hard to automate.
Mocks of the graphics components, to be sure your application calls the graphics components properly (a sketch of this follows the list).
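As a sketch of that third part, assuming the drawing code is written against a thin Canvas abstraction over the real graphics API (the interface and names here are hypothetical), a Google Mock based test might look like this:

    // Sketch: mock the graphics layer and assert the expected drawing calls.
    #include <gmock/gmock.h>
    #include <gtest/gtest.h>

    // Thin abstraction over the real graphics API.
    class Canvas {
    public:
        virtual ~Canvas() = default;
        virtual void DrawRect(int x, int y, int w, int h) = 0;
    };

    class MockCanvas : public Canvas {
    public:
        MOCK_METHOD(void, DrawRect, (int, int, int, int), (override));
    };

    // The code under test: lays out the diagram and draws it onto a Canvas.
    void RenderNestedShapes(Canvas& canvas);  // hypothetical entry point

    TEST(DiagramRendererTest, InnerShapeIsDrawnInsideOuterShape) {
        MockCanvas canvas;
        ::testing::InSequence seq;                       // enforce drawing order
        EXPECT_CALL(canvas, DrawRect(0, 0, 100, 100));   // outer shape Y
        EXPECT_CALL(canvas, DrawRect(10, 10, 80, 80));   // inner shape X, nested
        RenderNestedShapes(canvas);
    }

Whether the rectangles actually look right on screen is still the manual, eyeball part; the mock only pins down that the application drives the graphics API the way the spike established.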
When you make changes, you have to run both kinds of test: are the API calls all correct, and does that sequence of API calls produce an image that looks right?
You can -- if you want to really burst a brain cell -- try to create a PNG file from your graphics and test to see if the PNG file "looks" right. It's hardly worth the effort.
As you go forward, your requirements may change. In this case, you may have to rewrite the spike first and get things to look right. Then, you can pull out the sequence of API calls to create automated unit tests from the spike.
One can argue that creating the spike violates TDD. However, the spike is designed to create a testable graphics module. You can't easily write the test cases first because the test procedure is "show it to a person". It can't be automated.
You might consider first converting the initial input data into some intermediate format that you can test. Then you forward that intermediate format to the actual drawing function, which you have to test manually.
For example, when you have a program that inputs percentages and outputs a pie chart, you might have an intermediate format that exactly describes the dimensions and position of each sector.
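As a sketch of that split, assuming a hypothetical ComputePieLayout function as the testable intermediate step (the names are made up for illustration):

    // Pure layout step: percentages in, sector geometry out. No drawing here,
    // so this part is straightforward to unit test; only the final call that
    // hands the sectors to the graphics API still needs a visual check.
    #include <cassert>
    #include <cmath>
    #include <vector>

    struct PieSector {
        double startAngleDeg;  // where the sector begins, in degrees
        double sweepAngleDeg;  // how far it extends
    };

    std::vector<PieSector> ComputePieLayout(const std::vector<double>& percentages) {
        std::vector<PieSector> sectors;
        double start = 0.0;
        for (double p : percentages) {
            const double sweep = 360.0 * p / 100.0;
            sectors.push_back({start, sweep});
            start += sweep;
        }
        return sectors;
    }

    int main() {
        const auto sectors = ComputePieLayout({50.0, 25.0, 25.0});
        assert(sectors.size() == 3);
        assert(std::fabs(sectors[0].sweepAngleDeg - 180.0) < 1e-9);
        assert(std::fabs(sectors[2].startAngleDeg - 270.0) < 1e-9);
        return 0;
    }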
You've described a data model. The application presumably does something, rather than just sitting there with some data in memory. Write tests which exercise the behaviour of the application and verify the outcome is what is expected.