I'm running the unit tests in a project we have. All of the tests pass, but some of them are shown dimmed, whereas others are shown at normal brightness. I'm using VS 2019 Enterprise. I'm guessing the dimming is conveying something to the user; I just don't know what that might be. Here's a screenshot:
I want to get an Impacted test result in MSTest but am not getting the expected result. I have followed all the instructions written here: https://learn.microsoft.com/en-us/azure/devops/pipelines/test/test-impact-analysis?view=azure-devops
These are the VSTS log files, where you can see all the configuration done for Impact Analysis:
This is the test result image, where I cannot see any Impacted results:
My main branch is "Build Development" and the child branch is "Mstest_UT". We have rebased it, but I still did not get the Impacted result as expected.
After doing some research I learned that Impacted test results only appear if all test cases pass, so I made sure of that too, but still did not get such a result.
[TestMethod]
public void GetAboutTideContent_Passing_Valid_Data()
{
    // Arrange: stub the repository so any ApplicationUser returns an empty AboutTideEditor
    iAboutTideEditorRepository.Setup(x => x.GetAboutTideContent(It.IsAny<ApplicationUser>())).Returns(new AboutTideEditor() { });

    // Act: call the service under test (It.IsAny<T>() outside a Setup resolves to default, i.e. null here)
    ResponseData<AboutTideEditor> actual = aboutTideService.GetAboutTideContent(It.IsAny<ApplicationUser>());

    // Assert: the service should report success
    Assert.AreEqual(ProcessStatusEnum.Success, actual.Status);
}
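For reference, the test above presumably relies on Moq mocks created elsewhere in the test class. A minimal sketch of what that wiring might look like (the class and interface names here are assumptions read off the snippet, not taken from the actual project):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class AboutTideServiceTests
{
    // Assumed members matching the snippet above
    private Mock<IAboutTideEditorRepository> iAboutTideEditorRepository;
    private AboutTideService aboutTideService;

    [TestInitialize]
    public void Setup()
    {
        // Create the repository mock and inject it into the service under test
        iAboutTideEditorRepository = new Mock<IAboutTideEditorRepository>();
        aboutTideService = new AboutTideService(iAboutTideEditorRepository.Object);
    }
}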
I am writing a mock test in MSTest.
I am expecting an Impacted test result.
From what I understand from the link you provided, you should use this type of test from the start of your project ("growth and maturation of the test" hints at some kind of learning ability in the software). If you're bringing the test in halfway through, the program might already be locked into performing particular tests in a certain way (MS tooling sometimes keeps a "black box" approach). If that is the case, you should override/reset it and run from the start, without the program or user having pre-selected (detailed) tests. This of course might set you back several hours of testing, but weigh that against spending and losing even more time searching for what goes wrong; the lost time keeps adding up, and it is of the essence to minimize it. Also check the graph provided on the linked page; it is very informative about the order of actions (e.g. step 6).
In your first screenshot (the black console output) there is a difference in the parallel setup (also consider the bullets below). The output states that some DLL files were not found in the "test assembly". If there is a possibility to produce a test log, you might want to check that too, to see what typos might have occurred.
From the page:
At present, TIA is not supported for:
Multi-machine topology (where the test is exercising an app deployed to a different machine)
Data driven tests
Test Adapter-specific parallel test execution
.NET Core
UWP
In short: reset the whole test and run "fresh" to see if the errors persist.
I have been using Google Tests that someone else put together for me, and it has worked great...until today. Now when I try to run a test I get a very mysterious problem and I am dead in the water. I even had the IT guys recreate my profile, which did have some issues, etc. - still the same error, with little to go on:
Any ideas? Anything. Dead in the water here.
If I don't try to run an individual unit test as I was doing, but instead simply press F5 (something I had actually never done, as I always ran tests through the unit test framework), you can actually find the place where your program pukes.
My program was puking somewhere that had absolutely nothing to do with my unit test. So under the hood, it is clear that the framework needs to run and initialize a lot of stuff that has nothing to do with my tests - in this case it initialized an input stream from a file it could not find, way outside of what I was doing. Once I tracked that down, we were good again.
It was very counterintuitive that something so orthogonal to what I was doing caused this. ReSharper said they would try to give better error messages in the future.
I'm debugging a set of WCF services. Initially, I created some unit tests, but since I'm using threading I often receive "Aborted" or "Stopped" tests without any clear explanation why (this is a known bug in Visual Studio).
I found it extremely challenging to debug the services when I can't even read the log output, so I quickly wrote a custom Assert class and converted all the unit tests to console applications. This way, I was able to immediately fix a huge number of simple problems that were hard or impossible to fix before.
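As an illustration of that approach, a minimal custom Assert class for a console test runner might look something like this (just a sketch; the name ConsoleAssert and its behaviour are assumptions, not the code actually used):

using System;

// Minimal stand-in for the test framework's Assert, writing results to the console
public static class ConsoleAssert
{
    public static int Failures { get; private set; }

    public static void AreEqual<T>(T expected, T actual, string message = null)
    {
        if (Equals(expected, actual))
        {
            Console.WriteLine($"PASS: got '{actual}' as expected. {message}");
        }
        else
        {
            Failures++;
            Console.WriteLine($"FAIL: expected '{expected}' but got '{actual}'. {message}");
        }
    }

    public static void IsTrue(bool condition, string message = null)
    {
        AreEqual(true, condition, message);
    }
}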
So I'm wondering if it is a good idea to write unit tests as (fully automated) console apps first and convert them to real tests (ones that execute when launching unit tests in VS) later.
If you want to stick to the standalone console app, you can have a one-size-fits-all approach:
Change the application type of the MSTest (or NUnit) test project to "Console Application".
Add a public static void Main() that calls the unit tests you are interested in.
This exe can then run on its own, or it can run in the unit-test IDE; a sketch is below.
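A rough sketch of that approach, assuming the project's output type has been switched to Console Application (the test class and method names are placeholders):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MyTests
{
    [TestMethod]
    public void Adds_Two_Numbers()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}

public static class Program
{
    public static void Main()
    {
        // Call the tests you are interested in directly;
        // the same methods still run under the test runner via [TestMethod].
        var tests = new MyTests();
        tests.Adds_Two_Numbers();
    }
}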
I prefer a standalone console runner, as described in how-do-i-use-mstest-without-visual-studio.
We are just about to start a new project. The Proof of Concept (PoC) for this project was done simply using Win32. The plan is/was to flesh out the PoC, tidy the uglier parts and meet the requirements set by the project owners.
One of the requirements for the actual project is 100% code coverage, but I can see problems ahead: how can I achieve 100% code coverage with Win32 - the message pump will be exceptionally difficult to test effectively?! I could compile to a DLL, but won't there be code in the main app that won't be under coverage?
I am thinking of dropping the Win32 code and moving to MFC - at least then a lot of the boilerplate stuff will be hidden from view (and therefore from coverage).
Any thoughts on the problem?
I mean WndProc, but the same applies for WinMain. How can you unit-test that?
I do testing but not unit-testing: I do system/integration testing.
If you exercise your (whole) application while it's running under a debugger/profiler/code coverage analyser, then of course you will find (and the coverage analyser will show) that WinMain etc. are being run (are being covered).
The question then might be, how do you automate the system/integration testing of the whole application? You might have a test framework which automates driving the GUI; I don't know of any myself, but for example there's a list here. Alternatively, it might be acceptable (to the client) if the acceptance test suite is a sequence of non-automated/manual tests.
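As one illustration of driving the GUI from a separate process (just a sketch, using the built-in Windows UI Automation API from C#; the executable path, window title and button name are made up), so that the application itself runs under the coverage analyser:

using System.Diagnostics;
using System.Threading;
using System.Windows.Automation; // UIAutomationClient / UIAutomationTypes references

class GuiDriver
{
    static void Main()
    {
        // Launch the application under test (path assumed)
        Process.Start(@"C:\path\to\MyApp.exe");
        Thread.Sleep(2000); // crude wait for the main window to appear

        // Find the main window by its title (assumed)
        var mainWindow = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "My App"));

        // Find a button by name (assumed) and click it via the Invoke pattern
        var button = mainWindow.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.NameProperty, "OK"));
        var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
        invoke.Invoke();
    }
}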
See also Should one test internal implementation, or only test public behaviour?
I have noticed that if I have a set of regression tests and decide to change a property on one of my objects (DTO) from int to decimal, for example - I make all the other changes and the tests pass like normal. But if this project is under source control (VSS specifically), this small change will cause something strange to happen...
Similar to this question
Testing in Visual Studio Succeeds Individually, Fails in a Set
But a little different. I can make this change and try to run my tests, and any test that has an assert around this new data type will fail; but if I then click "debug checked tests" and it runs through the previously failed tests, they pass. No changes to the test code, etc.
Does anyone know why this might be happening? I hate to work outside of source control but if my tests are not reliable ... why have them at all in this case ... and I live for testing code :P
Given the age of the question, I doubt it's still an issue for you, but I wonder if you have bin or obj folders under source control, or an assembly that is in them?
If they are, then when you compile the app (before MSTest runs), the source-controlled assemblies are going to be read-only and won't get overwritten by the compiler, and thus your tests will run against out-of-date binaries.