I'm currently in the process of increasing code coverage on our software products and have run into an issue: all of my unit tests (when compiled using 'Any CPU') are failing with a 'BadImageFormatException'.
This exception can be circumvented by building the solution using 'x86' instead of 'Any CPU'; however, our requirements are such that we need to be able to run the tests as Any CPU/x64.
All unit tests involving Moq follow pretty much the same format:
[TestMethod]
public void GetProduct_ValidId_ProductReturned()
{
    // Setting up the object
    Product prod = new Product();
    prod.ID = 7;
    prod.Name = "Test";

    // Create the mocks
    var mockProductRepo = new Mock<IRepository<Product>>();
    var testDb = new Mock<IUnitOfWork>();

    // Setup what the repo needs to return, in this case it's a Product
    mockProductRepo.Setup(m => m.getByID(7)).Returns(prod);

    // Setup what the database needs to return, in this case it's our repo which we've already set up
    testDb.SetupGet(m => m.ProductRepo).Returns(mockProductRepo.Object);

    // Run the test
    Product returnedProd = ProductHelper.getProduct(testDb.Object, 7);
    Assert.IsNotNull(returnedProd);
}
I can confirm that this test is successful when it is built using x86. Does anyone have any ideas on how I can get Moq to play nice when built using 'Any CPU'?
As an aside, I can also confirm that all the projects in the solution are built using the same platform ('Any CPU'). I'm using Moq v4.0.
EDIT: Here is the full exception: Test method [ProductName and the method called] threw exception:
System.BadImageFormatException: Could not load file or assembly '[Product name], Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format.
OK, so after some digging I finally found out what the issue was. Even if you select 'Build' and then 'Configuration Manager' from the menu and see that the Platform is set to 'Any CPU' (as was my case), that alone is not enough: what I hadn't done was check the Platform target of each project.
To check the Platform target you need to do the following:
Right-click your project and select 'Properties'
Select the 'Build' tab on the left
Ensure that the Platform target of your test project matches that of the project you are testing
In my case my test was targeting 'Any CPU' but my live project was targeting 'x64'. This is what was causing the issue.
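If you would rather verify this outside of Visual Studio, the same setting is stored in each project file as a PlatformTarget property. A minimal sketch of the relevant fragment (the configuration condition and value shown here are illustrative; your .csproj will differ):

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <!-- This value must agree between the test project and the project under test -->
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>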
This can be caused by missing project or other assembly references. Try making sure you have project references for all the projects in your solution.
This post has a further example.
I have a multi-module Android project. I have a bunch of unit tests in each module and have always been able to run them all at once using a single run configuration.
Many of my tests use a base class that runs with RobolectricTestRunner. This base class looks like this:
@RunWith(RobolectricTestRunner::class)
@Config(application = AndroidTest.ApplicationStub::class,
        manifest = Config.NONE,
        sdk = [21])
abstract class AndroidTest {

    @Suppress("LeakingThis")
    @Rule @JvmField val injectMocks = InjectMocksRule.create(this@AndroidTest)

    fun application(): Application = ApplicationProvider.getApplicationContext()

    internal class ApplicationStub : Application()
}
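For reference, a typical test in one of the modules simply extends this base class. A simplified sketch (the class name and assertion are illustrative, not one of my real tests):

import org.junit.Assert.assertNotNull
import org.junit.Test

class ExampleTest : AndroidTest() {

    @Test
    fun applicationContextIsAvailable() {
        // application() is inherited from the AndroidTest base class above
        assertNotNull(application())
    }
}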
When running these tests using the above config, I get the error:
[Robolectric] NOTICE: legacy resources mode is deprecated; see http://robolectric.org/migrating/#migrating-to-40
This makes many of my tests fail with ResourceNotFoundException
However, when I run tests only in a specific module, everything passes. This is because Robolectric now uses Binary resources:
[Robolectric] sdk=21; resources=BINARY
I have followed the migration instructions in the build.gradle file for each module, adding the following to each android block:

testOptions {
    unitTests {
        includeAndroidResources = true
        returnDefaultValues = true
    }
}
One clue I have found, but have been unable to fix, is this warning when I run the ALL UNIT TEST task:
WARNING: No manifest file found at build/intermediates/merged_manifests/debug/../../library_manifest/debug/AndroidManifest.xml.
Falling back to the Android OS resources only.
No such manifest file: build/intermediates/merged_manifests/debug/../../library_manifest/debug/AndroidManifest.xml
To remove this warning, annotate your test class with @Config(manifest=Config.NONE).
I have tried, as you have seen, to add the manifest=Config.NONE, which had no effect (and is now deprecated anyway).
Edit: Also tried android.enableUnitTestBinaryResources = true in settings.gradle, but this prevents the app from building due to it being a deprecated flag in the current gradle tools.
Thanks for any help provided!
So with the default unit test run platform being changed to Gradle in Android Studio, I managed to figure out a way to run unit tests in multiple modules all at once, circumventing the Robolectric bug.
First, go into run configurations and create a new Gradle Config.
Then, as the gradle project, select the top level project.
For arguments, use --tests "*"
And now for the gradle tasks, this is a little bit more error-prone. Here is an example of how I have it setup for my project:
:androidrma:cleanTestGoogleDebugUnitTest :androidrma:testGoogleDebugUnitTest
:calendar:cleanTestDebugUnitTest :calendar:testDebugUnitTest
:gamification:cleanTest :gamification:test
:player:cleanTest :player:test
:playlists:cleanTest :playlists:test
:sleepjournal:cleanTest :sleepjournal:test
:sound-content-filters:cleanTest :sound-content-filters:test
Please note that I inserted new lines between each module for clarity here; in the actual tasks field, just separate each entry with a space.
For your app module (in my case named androidrma), you must include your build variant's name in the cleanTest<Variant>UnitTest and test<Variant>UnitTest tasks; my variant is GoogleDebug, hence cleanTestGoogleDebugUnitTest and testGoogleDebugUnitTest.
If we look at the calendar module, it is an Android module, so it still follows the same logic as the app module.
However, player, playlists, sleepjournal, etc. are pure Kotlin modules, so their tasks use the plain cleanTest and test syntax.
Once you have entered all this information and everything is working, I recommend checking the "store as project file" checkbox at the top right of the run config setup screen.
This works in Android Studio 4.2 as well as Arctic Fox; I haven't tested other versions.
I am using Visual Studio 2015 IntelliTest for data-driven test execution, and it is a very useful tool for data-driven testing. However, the IntelliTest run fails when I run it from the project that IntelliTest generated, while the same test passes when run from the Test Explorer window. We have implemented the repository pattern and use Entity Framework 6.0 to carry out the database operations. Please find the stack trace below for more information. Can anyone help me with this?
System.NullReferenceException: Object reference not set to an instance of an object.
at System.Void MyFramework.Persistence.Entity.EF1Repository..ctor(System.String connectionStringName, System.String objContextName)
at System.Void MyFramework.Persistence.PersistenceManager..ctor(System.String schema, System.String module)
at System.Collections.Generic.List`1<MyDTO> BusinessLogic.ListedPassenger.Passengers(MyDTO entity)
at System.Collections.Generic.List`1<MyDTO> BusinessLogic.Tests.ListedPassengerTest.PassengersTest(BusinessLogic.ListedPassenger target, MyDTO entity)
Please ensure that your test is isolated from the environment using appropriate mock implementations (using Fakes, etc.). Take a look here for a hands-on example: https://blogs.msdn.microsoft.com/visualstudioalm/2015/08/14/intellitest-hands-on/.
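One way to achieve that isolation is to keep the EF-backed persistence behind an interface and inject a stub in the test, so the EF1Repository constructor (which needs a real connection string) is never reached. The sketch below uses Moq rather than Fakes, since Moq already appears elsewhere on this page; the IPassengerRepository interface, the constructor injection, and the test body are illustrative assumptions, not the poster's actual code.

// Hypothetical interface extracted from the persistence layer
public interface IPassengerRepository
{
    List<MyDTO> GetPassengers(MyDTO entity);
}

// Hypothetical refactoring of the business logic to accept the dependency
public class ListedPassenger
{
    private readonly IPassengerRepository _repository;

    public ListedPassenger(IPassengerRepository repository)
    {
        _repository = repository;
    }

    public List<MyDTO> Passengers(MyDTO entity)
    {
        return _repository.GetPassengers(entity);
    }
}

[TestMethod]
public void PassengersTest_ReturnsRepositoryResult()
{
    // The stub replaces PersistenceManager/EF1Repository, so no connection string is needed
    var repo = new Mock<IPassengerRepository>();
    repo.Setup(r => r.GetPassengers(It.IsAny<MyDTO>()))
        .Returns(new List<MyDTO> { new MyDTO() });

    var target = new ListedPassenger(repo.Object);

    Assert.AreEqual(1, target.Passengers(new MyDTO()).Count);
}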
I have a class in my Xamarin PCL which calls GetRuntimeProperties (from System.Reflection). As an example, let's say my PCL class has this method:
public string ExampleMethod(string arg) {
    if (arg == null) return null;
    IEnumerable<PropertyInfo> infos = this.GetType().GetRuntimeProperties();
    return infos[0].Name;
}
I then have a Xamarin.UITest project which references the PCL project and tests this class. I have two test cases in my TestFixture so far, which for our example would be the following:
[Test]
public void TestExampleMethod_ArgNull_Null() {
    Assert.That(exampleInstance.ExampleMethod(null), Is.Null);
}

[Test]
public void TestExampleMethod_ArgNotNull_NotNull() {
    Assert.That(exampleInstance.ExampleMethod("testValue"), Is.NotNull);
}
When I run the Xamarin.UITest project, it compiles, runs the tests, and completes fine on both Android and iOS platforms. The TestExampleMethod_ArgNull_Null test passes since it returns early. However, the TestExampleMethod_ArgNotNull_NotNull test fails with:
System.MissingMethodException : Method 'RuntimeReflectionExtensions.GetRuntimeProperties' not found.
So it appears that even though everything is compiling just fine, and I am able to run other test cases fine, the Xamarin.UITest project is not able to use anything in System.Reflection. Does anyone know how I go about debugging this?
On my end, using the following failed to build:
IEnumerable<PropertyInfo> infos = this.GetType().GetRuntimeProperties();
return infos[0].Name;
due to not being able to use bracket indexing on an IEnumerable. I changed it to this:
List<PropertyInfo> infos = this.GetType().GetRuntimeProperties().ToList();
return infos[0].Name;
And the project built and the tests ran successfully.
The class with the method using Reflection was in a PCL which was referenced from a UI Test project.
So basically I am not able to reproduce the error you got.
I posted this to Xamarin Support as well (thanks @jgoldberger) and we were able to figure out that it was due to a project setup issue. This is a project which uses Couchbase Lite, which requires a specific version of Json.Net (6.0.4 as of this post). I must have accidentally updated the packages on some of the projects, since the same version of Json.Net was not being used across all the projects (PCL, Android, iOS, and UITest). I ended up re-creating the project from scratch and that took care of it.
I am having problems with TeamCity, where it proceeds to run build steps even if previous steps were unsuccessful.
The final step of my Build configuration deploys my site, which I do not want it to do if any of my tests fail.
Each build step is set to only execute if all previous steps were successful.
In the Build Failure Conditions tab, I have checked the following options under Fail build if:
-build process exit code is not zero
-at least one test failed
-an out-of-memory or crash is detected (Java only)
This doesn't work - even when tests fail TeamCity deploys my site, why?
I even tried to add an additional build failure condition that will look for specific text in the build log (namely "Test Run Failed.")
When viewing a completed test in the overview page, you can see the error message against the latest build:
"Test Run Failed." text appeared in build log
But it still deploys it anyway.
Does anyone know how to fix this? It appears this issue has existed for a long time; see here.
Apparently there is a workaround:
"So far we do not consider this feature very important, as there is an obvious workaround: the script can check the necessary condition and not produce the artifacts configured in TeamCity. E.g. a script can move the artifacts from a temporary directory to the directory specified in TeamCity as 'publish artifacts from' just before the finish, and only if the build operations were successful."
But it is not clear to me exactly how to do that, and it doesn't sound like the best solution either. Any help appreciated.
Edit: I was also able to workaround the problem with a snapshot dependency, where I would have a separate 'deploy' build that was dependent on the test build, and now it doesn't run if tests fail.
This was useful for setting the dependency up.
This is a known problem as of TeamCity 7.1 (cf. http://youtrack.jetbrains.com/issue/TW-17002) which has been fixed in TeamCity 8.x+ (see this answer).
TeamCity distinguishes between a failed build and a failed build step. While a failing unit test will fail the build as a whole, unfortunately TeamCity still considers the test step itself successful because it did not return a non-zero error code. As a result, subsequent steps will continue running.
A variety of workarounds have been proposed, but I've found they either require non-trivial setup or compromise on the testing experience in TeamCity.
However, after reviewing a suggestion from @arex1337, we found an easy way to get TeamCity to do what we want. Just add an extra PowerShell build step after your existing test step that contains the following inline script (replacing YOUR_TEAMCITY_HOSTNAME with your actual TeamCity host/domain):
# Ask the TeamCity REST API for the current build's status (guest authentication must be enabled)
$request = [System.Net.WebRequest]::Create("http://YOUR_TEAMCITY_HOSTNAME/guestAuth/app/rest/builds/%teamcity.build.id%")
$xml = [xml](new-object System.IO.StreamReader $request.GetResponse().GetResponseStream()).ReadToEnd()
# Pull the status attribute from the <build> element
Microsoft.PowerShell.Utility\Select-Xml $xml -XPath "/build" | % { $status = $_.Node.status }
if ($status -eq "FAILURE") {
    throw "Failing this step because the build itself is considered failed. This is our way to workaround the fact that TeamCity incorrectly considers a test step to be successful even if there are test failures. See http://youtrack.jetbrains.com/issue/TW-17002"
}
This inline PowerShell script just uses the TeamCity REST API to ask whether or not the build as a whole is considered failed (the variable %teamcity.build.id% will be replaced by TeamCity with the actual build id when the step is executed). If the build as a whole is considered failed (say, due to a test failure), the script throws an error, causing the process to return a non-zero exit code, which in turn causes the individual build step to be considered unsuccessful. At that point, subsequent steps can be prevented from running.
Note that this script uses guestAuth, which requires the TeamCity guest account to be enabled. Alternatively, you can use httpAuth instead, but you'll need to update the script to include a TeamCity username and password (e.g. http://USERNAME:PASSWORD@YOUR_TEAMCITY_HOSTNAME/httpAuth/app/rest/builds/%teamcity.build.id%).
So, with this additional step in place, all subsequent steps set to execute "Only if all previous steps were successful" will be skipped if there are any previous unit test failures. We're using this to prevent automated deployment if any of our NUnit tests are not successful until JetBrains fixes the problem.
Thanks to @arex1337 for the idea.
Just to prevent confusion: this issue is fixed in TeamCity v8.x, so we don't need those workarounds now.
You can specify the step execution policy via the Execute step option:
Only if build status is successful - before starting the step, the build agent requests the build status from the server, and skips the step if the status is failed.
https://confluence.jetbrains.com/display/TCD8/Configuring+Build+Steps
Of course you need to fail the build if at least one unit test failed:
https://confluence.jetbrains.com/display/TCD8/Build+Failure+Conditions
On the Build Failure Conditions page, in the Fail build if area, specify when TeamCity should fail builds:
at least one test failed: check this option to mark the build as failed if at least one test fails.
This is (as you have found) a known issue with TeamCity; there is a set of linked issues in their issue tracker. It is hopefully scheduled to be resolved in the next release of TeamCity (version 8.x).
In the meantime, the way we resolved the issue (for version 6.5.5) was to download the test results file as part of the later steps. The file was then parsed to check for any test failures, and if any were found we returned an error code, breaking the build properly (and performing any cleanup we needed as part of that failure); this would probably work for you too.
A TeamCity build failure does not mean the build will stop; TeamCity will still publish the artifacts if your build produces the output files it expects, and will only update the build status accordingly.
However, you can stop the build process by modifying your build script to stop on test case failure; see the sketch below. If you are using MSBuild, ContinueOnError="false" on the task that runs your tests will do that.
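For illustration, a minimal sketch of what that could look like in an MSBuild target; the target name, runner command, and assembly name below are assumptions, not taken from the question:

<Target Name="RunTests">
  <!-- If the test runner exits with a non-zero code, ContinueOnError="false"
       makes this target fail instead of letting the build carry on. -->
  <Exec Command="nunit-console.exe MyProject.Tests.dll" ContinueOnError="false" />
</Target>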
In the end, I was able to solve the problem with a snapshot dependency, where I would have a separate 'deploy' build that was dependent on the test build, and now it doesn't run if tests fail.
This was useful for setting the dependency up.
I'm able to run my unit tests through VS2010 with CodeRush, but when I try to run the tests with the Icarus Test Runner I get this error.
An exception was thrown while exploring tests.
Location: C:\XXX\XXX.Server.Tests\bin\Release\XXX.Server.Tests.DLL
Reference: XXXServer.Tests, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null Details: System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
at System.Reflection.RuntimeModule.GetTypes(RuntimeModule module)
at System.Reflection.Assembly.GetTypes()
at Gallio.Common.Reflection.Impl.NativeAssemblyWrapper.GetTypes()
at Gallio.Framework.Pattern.TestAssemblyPatternAttribute.PopulateChildrenImmediately(IPatternScope assemblyScope, IAssemblyInfo assembly)
at Gallio.Framework.Pattern.TestAssemblyPatternAttribute.Consume(IPatternScope containingScope, ICodeElementInfo codeElement, Boolean skipChildren)
at Gallio.Framework.Pattern.DefaultPatternEvaluator.Consume(IPatternScope containingScope, ICodeElementInfo codeElement, Boolean skipChildren, IPattern defaultPrimaryPattern)
I've made sure Copy Local is set to True for project references.
The tests were being run on a 64-bit machine with the test project's Platform target set to "Any CPU", while the project being tested was set to "x86". So the machine was loading the tests as 64-bit, which caused the error when it tried to load the project being tested the same way.