Sitecore 8.1 multivariate testing: Optimization shows no test status

I have set up a multivariate test in Sitecore 8.1. In the Experience Editor I get a notification saying the item has an active test, and the Optimization tab shows a red button, but the status shows no tests. I checked the analytics database and found no test record. I deployed the test from the workflow and it is running, and I get all the options to stop, cancel and suspend it. Sitecore is not testing my variations; can anyone help me resolve this?

From what you are describing, it sounds like there is a problem with the sitecore_testing_index: perhaps the index is invalid, not built, or not present. This index is required for Sitecore's Experience Optimisation to find the tests in their various states.
I recommend taking a default copy of Sitecore.ContentTesting.Solr.IndexConfiguration.config from a fresh install and copying it to App_Config/Include/ContentTesting.
Once that is done, rebuild the index via the Indexing Manager. You should then be able to find the tests in Experience Optimisation.
If you're still having trouble, I recommend replacing the entire contents of App_Config/Include/ContentTesting with the files from a fresh install and re-indexing.
I also recommend having a look at the item that has the multivariate test on it: the test reference should be stored in the default Page Level Test Set field, with a value along the lines of the following.
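If the test has been deployed, the raw value of that field should simply be the ID of the test definition item (in Sitecore 8 these live under /sitecore/system/Marketing Control Panel/Test Lab). The GUID below is hypothetical, just to show the shape:
{8A94BF2E-6C41-4A1B-9E3D-2F5D7C0A1B34}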
If the field is populated but you are still unable to find the test, follow this guide on how to create Multivariate Tests in Sitecore to identify where something is going wrong.

Related

Store results of automatic tests and show results in a web UI

I'm looking for a piece (or a set) of software that can store the outcome (ok/failed) of an automated test together with additional information (the test protocol, to see the exact reason for a failure, and the device state at the end of a test run as a compressed archive). The results should be accessible via a web UI.
I don't need fancy pie charts or colored graphs. A simple table is enough. However, the user should be able to filter for specific test runs and/or specific tests. The test runs should have a sane name (like the version of the software that was tested, not just some number).
Currently the build system includes unit tests based on cmake/ctest whose results should be included. Furthermore, integration testing will be done in the future, where the actual tests will run on embedded hardware controlled via network by a shell script or similar. The format of the test results is therefore flexible and could be something like subunit or TAP, if that helps.
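For reference, a minimal TAP (version 13) stream looks like the following; the test names here are made up:
TAP version 13
1..3
ok 1 - configure device
not ok 2 - flash firmware
  ---
  message: 'timeout waiting for device'
  ...
ok 3 - reboot device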
I have played around with Jenkins, which is said to be great for automated tests, but the plugins I tried in order to make that work don't seem to interact well. To be specific: the Test Results Analyzer plugin doesn't show tests imported with the TAP plugin, and the names of the test runs are just meaningless build numbers, even though I used the Job Name Setter plugin to set a sensible job name. The filtering options are limited, too.
My somewhat uneducated guess is that I'll stumble over similar issues if I try other random tools of the same class as Jenkins.
Is anyone aware of a solution for my described testing scenario? Lightweight/open source software is preferred.

How to link regression unit tests with issue tracker?

I am working on an existing web application (written in .NET) which, surprisingly, has a few bugs in it. We track outstanding issues in a bug tracker (JIRA, in this case), and we already have some test libraries in place (written in NUnit).
What I would like to be able to do is, when closing an issue, to be able to link that issue to the unit-/integration-test that ensures that a regression does not occur, and I would like to be able to expose that information as easily as possible.
There are a few things I can think of off-hand, that can be used in different combinations, depending on how far I want to go:
copy the URL of the issue and paste it as a comment in the test code;
add a Category attribute to the test and name it Regressions, so I can select regression tests explicitly and run them as a group (but how to automatically report on which issues have failed regression testing?);
make the issue number part of the test case name;
create a custom Regression attribute that takes the URI of the issue as a required parameter (see the sketch after this list);
create a new custom field in the issue tracker to store the name (or path) of the regression test(s);
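For the custom-attribute option, a minimal NUnit sketch might look like the one below. This is a hypothetical illustration, not an established pattern; it leans on the fact that NUnit lets you derive from CategoryAttribute, so the tests also remain selectable as a group:
using System;
using NUnit.Framework;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class RegressionAttribute : CategoryAttribute
{
    public string IssueUri { get; private set; }

    public RegressionAttribute(string issueUri)
        : base("Regressions")   // every regression test lands in the "Regressions" category
    {
        IssueUri = issueUri;    // e.g. "https://jira.example.com/browse/PROJ-123" (hypothetical URL)
    }
}

// Usage:
// [Test, Regression("https://jira.example.com/browse/PROJ-123")]
// public void Discount_IsNotAppliedTwice() { ... }
A report step could then reflect over the test assembly, collect the IssueUri of every failed regression test, and push that list back to JIRA.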
The ideal scenario for me would be that I can look at the issue tracker and see which issues have been closed with regression tests in place (a gold star to that developer!), and to look at the test reports and see which issues are failing regression tests.
Has anyone come across, or come up with, a good solution to this?
I fail to see what makes regression tests different from any other tests. Why would you want to run only regression tests, or everything except regression tests? If a regression or non-regression test fails, that means this specific functionality is not working, and the product owner has to decide how critical the problem is. If you stop differentiating tests, then simply do code reviews and don't allow any commits without tests.
In case we want to see what tests have been added for a specific issue, we go to the issue-tracking system. All the commits are there, ideally only one (thanks to squashing), and the issue tracker is connected to git, so we can easily browse the changed files.
In case we want to see (for whatever reason) what issue is related to some specific test or line of code, we just give tests a meaningful business name, which helps in finding any related information. If the problem is more technical and we know we may need the specific issue, we simply add the issue number in a comment. If you want to automate retrieval, just standardize the format of the comment.
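For instance, a standardized comment convention (hypothetical, not an established format) could be as simple as:
// Regression: PROJ-123 (https://jira.example.com/browse/PROJ-123)
which a grep or a small script can then extract when generating reports.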
Another helpful technique is to structure your program correctly: if every piece of functionality has its own package, and tests are packaged and named meaningfully, then it is also easy to find any related code.

Android unit testing framework that can interact with Google Maps API. Does one exist?

I am working with Google Maps on Android 4.0 and I would like to know:
Is there a testing framework (or anything else) that can interact with the Google Maps API? What are the possibilities when interacting with and testing Google Maps on Android? Is it possible to find all the pins on the map and perform a click on them, or perhaps to determine the zoom level?
There seem to be a couple of questions on this site dealing with similar issues, with no answers.
I realize "interact with" is a broad term. I am really looking for any kind of help whether it is a suggestion or just to tell me something is not possible.
I have been using JUnit but it seems limited by itself. Just today I started looking at Robotium since the majority of tests I need to do are UI based. I am new to Robotium so maybe it is possible with this and I have not discovered it yet?
When our team created an Android application that used Google maps, I was able to create tests using the R.id for the map fragment. It is important to note that I had access to the Application's code, so I knew what variables to look for.
Prior to creating this test, I wanted to make sure that I was targeting the correct R.id, so I went into the R file, copied the value, and checked it with
assertEquals(id.satellite, copiedRValue);
which passed, and I then built the test around the changeable variable:
boolean initialPresent = solo.waitForFragmentById(id.satellite); // id.satellite is defined in the R file; waitForFragmentById returns false after timing out if the fragment is not present
assertTrue(initialPresent);
// save the map type, leave
// and do other awesome stuff
// before coming back to the map
boolean finalPresent = solo.waitForFragmentById(id.satellite); // capture whether the map fragment is displayed again; this too times out if the fragment is not visible
// assert that the map fragment was present both times
assertTrue(finalPresent);
The biggest problem I had with this test, and with other Robotium tests, is that I had click events that brought up menus, and sometimes Robotium would not perform the click, so the test would fail on it.
This was my first go at testing with Robotium, so there might be other ways to manipulate the R.id values to create a tighter test.
In case anyone wants to know: after much searching I finally found something that can test Google Maps. Things such as the zoom level and, I believe, tapping a pin (the method is called tapMapMarkerItem()) are supported. I have not tested the pin tap yet, though.
Apparently the awesome Robotium does not support map testing by itself. Nicholas Albion was nice enough to create an extension that provides testing support for maps on Android. Thank you so much Nicholas!
So here it is:
1. Download the Robotium jars from robotium.org (I found this article by Lars Vogel helpful: http://www.vogella.com/articles/AndroidTesting/article.html)
2. Download the extension from https://github.com/nalbion/robotium-maps

Is there a good way to get a Code Coverage Report for Apex Code?

We're developing a lot of enhancements to Salesforce using Visualforce and Apex as part of a larger system, and as part of our quality metrics we have to provide a report to management on our code coverage.
I'd like to get a report similar to the one produced by Run All Tests in the Force.com IDE but in HTML so I can display it easily via a web interface.
For the rest of our system we use Sonar (http://www.sonarsource.org/) to produce the reports.
Does anybody know the best approach to this?
I've explored the API documentation but have been unable to find out whether the coverage percentages are stored against the classes, so querying for them doesn't seem to be an option.
Any help or pointers would be greatly appreciated.
If you run the Apex tests yourself via the API, there are objects returned that indicate which lines weren't covered by the tests in that run. You can run the tests via either the synchronous or the asynchronous method.
You can then use that data to create a report in whatever format you require. For example, I've used it to create a basic report in the FuseIT SFDC Explorer (Windows-based and free); I'm just dumping out the line ranges that aren't covered.
You will probably want to run all the tests in one run to get the complete code coverage across all the tests. For example, when I ran only one out of a much greater number of test classes, the code coverage appeared much lower than the cumulative tests would give; it did, however, show which lines an individual test class reaches.
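As a possible alternative (an assumption on my part; check that your org's API version exposes the Tooling API and this object), the aggregate coverage can also be pulled with a single query and rendered to HTML however you like:
SELECT ApexClassOrTrigger.Name, NumLinesCovered, NumLinesUncovered
FROM ApexCodeCoverageAggregate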
I've also heard good things about MavensMate for the Sublime Text editor. Being open source, it should let you see how it integrates with the testing API and then generates its reports.

WatiN - what to test?

I have been writing a lot of unit tests for the code I write. I've just started to work on a web project and I have read that WatiN is a good test framework for the web.
However, I'm not exactly sure what I should be testing. Since most of the web pages I'm working on are dynamic user generated reports, do I just check to see if a specific phrase is on the page?
Besides just checking if text exists on a page, what else should I be testing for?
First think of what business cases you’re trying to validate. Ashley’s thoughts are a good starting point.
You mentioned most pages are dynamically generated user reports. I’ve done testing around these sorts of things and always start by figuring out what sort of baseline dataset I need to create and load. That helps me ensure that, if everything's working properly, I get back exactly the set of records in the reports that I expect. From there I’ll write automated tests to check that I get the right number of records, the right starting and ending records, records containing the right data, etc.
If the reports are dynamic then I’ll also check whether filtering works properly, that sorting behaves as expected, etc.
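For example, a minimal WatiN/NUnit sketch along those lines might look like this (the URL, table ID and expected count are hypothetical, assuming a baseline dataset with exactly 42 matching records):
using NUnit.Framework;
using WatiN.Core;

[Test]
public void Report_ReturnsExpectedRows_ForBaselineData()
{
    using (var browser = new IE("http://localhost/reports/sales?from=2015-01-01&to=2015-01-31"))
    {
        var rows = browser.Table(Find.ById("reportTable")).TableRows;
        Assert.AreEqual(42 + 1, rows.Count); // 42 data rows plus the header row
    }
}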
Something to keep in mind is to keep a close eye on the value of these tests. It may be that simply automating a few tests around the main business use cases is good enough for you; handle the rest manually via exploratory testing.
You essentially want to test as if you were a user entering your site for the first time. You want to make sure that every aspect of your page is running exactly the way you want it to. For example, if there is a signup/login screen, automate those flows to ensure that they are both working properly. Automate the navigation of various pages, using assertions just to ensure each page loaded. If there are generated reports, automate all the generations and check the text in them to ensure it is what the "user" (you) specified. If you have any logic saying, for example, that when you check this box all other boxes should be checked as well, test that too. There are many assertions that can be applied; I am not sure which unit-testing framework you are using, but most have a very rich assortment.
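As a concrete illustration of the signup/login case, a minimal WatiN sketch (the URL, element IDs and credentials are hypothetical; WatiN drives Internet Explorer):
using NUnit.Framework;
using WatiN.Core;

[Test]
public void Login_WithValidCredentials_ShowsWelcomeMessage()
{
    using (var browser = new IE("http://localhost/login"))
    {
        browser.TextField(Find.ById("username")).TypeText("testuser");
        browser.TextField(Find.ById("password")).TypeText("secret");
        browser.Button(Find.ById("loginButton")).Click();
        Assert.IsTrue(browser.ContainsText("Welcome, testuser"));
    }
}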