Roslyn Source Generators Unit Testing error CS0234: The type or namespace name 'Console' does not exist in the namespace 'System'

I am trying to write a unit test for a simple Roslyn source generator based on the example here.
I successfully ran both MSTest and xUnit tests as demonstrated in Section B of "Unit testing of generators" in the cookbook here.
My full solution is here for you to see.
The problem is as follows. I added a bit more code to the generator, so it now looks like this:
context.AddSource("MyGeneratedFile.cs", SourceText.From(@"
namespace GeneratedNamespace
{
    public class GeneratedClass
    {
        public static void GeneratedMethod()
        {
            // generated code
            System.Console.WriteLine(""Hello..."");
        }
    }
}", Encoding.UTF8));
}
Note that I added the following line:
System.Console.WriteLine(""Hello..."");
Now when I run the test, it fails. Looking into the diagnostics object, I see the following error:
error CS0234: The type or namespace name 'Console' does not exist in the namespace 'System' (are you missing an assembly reference?)
I am definitely missing something.
I am new to Roslyn, so how can I make the test pass? How do I add a reference to the assembly that holds the System namespace?
One way is to simply comment out the offending line, and yes, the test passes that way. But I am curious: how do I attach the required assembly that it is looking for?
Update
As @Youssef13 suggested, I added a MetadataReference for the Console type.
That issue is now gone, but the following error appears:
error CS0012: The type 'Object' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Runtime, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
I tried adding another MetadataReference for System.Object here, but this did not resolve it; the error above persists.
How do I add a reference to System.Runtime?
Update 2
I have now resolved it by adding the reference as follows. Take a look at it here.
var systemRuntime = Assembly.Load("System.Runtime, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a");
var systemRuntimeRef = MetadataReference.CreateFromFile(systemRuntime.Location);
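An aside, not taken from the linked solution and assuming the tests run on a .NET Core / .NET 5+ host: instead of loading facades such as System.Runtime one by one, you can reference every assembly the host runtime itself trusts, which avoids chasing further missing-reference errors one at a time. A hedged sketch (the variable names are mine):

using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis;

// Build MetadataReferences from the runtime's trusted assembly list
// (available on .NET Core / .NET 5+ via AppContext).
var trustedAssemblyPaths = ((string)AppContext.GetData("TRUSTED_PLATFORM_ASSEMBLIES"))
    .Split(Path.PathSeparator);
var allRuntimeReferences = trustedAssemblyPaths
    .Select(path => MetadataReference.CreateFromFile(path))
    .ToList();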

I very much recommend "Solution A" of the unit testing approaches from the document you linked. But for "Solution B", you'll have to provide the assembly containing the Console class. You can do that using:
MetadataReference.CreateFromFile(typeof(Console).Assembly.Location)
There is an IEnumerable<MetadataReference> parameter for CSharpCompilation.Create.
The testing library used by Solution A, however, already includes everything you need for whatever .NET version you target, which is more flexible.
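For illustration, a minimal sketch of what passing those references to CSharpCompilation.Create could look like in a Solution-B style test. The assembly name, placeholder source, and compilation options are made up for this sketch; the System.Runtime facade is loaded the same way as in Update 2 above.

using System;
using System.Collections.Generic;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

// References needed so the generated code can use System.Console
var references = new List<MetadataReference>
{
    // assembly containing System.Object (System.Private.CoreLib on modern .NET)
    MetadataReference.CreateFromFile(typeof(object).Assembly.Location),
    // assembly containing System.Console
    MetadataReference.CreateFromFile(typeof(Console).Assembly.Location),
    // the System.Runtime facade, which type-forwards the core types
    MetadataReference.CreateFromFile(Assembly.Load("System.Runtime").Location),
};

// Input compilation the generator will be run against (placeholder source)
var inputCompilation = CSharpCompilation.Create(
    assemblyName: "GeneratorTests",
    syntaxTrees: new[] { CSharpSyntaxTree.ParseText("class Placeholder { }") },
    references: references,
    options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));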

Related

jacoco shows lines as not covered though the lines get executed while running the code

I have the below lines of code, which are indicated as not "executed" by Jacoco.
But when I debug the test case, it does execute those lines. Below are the test cases I wrote.
@PrepareForTest({MessagingAdapterFactory.class, MessagingConfigReaderFactory.class, UpdaterServiceExecutor.class, Files.class})
@Test
public void should_shutDown_the_scheduledExecutor_and_close_the_messagingAdapter() throws Exception {
    PowerMockito.mockStatic(Files.class);
    PowerMockito.when(Files.exists(any())).thenReturn(true);
    PowerMockito.mockStatic(MessagingAdapterFactory.class);
    PowerMockito.when(MessagingAdapterFactory.getMessagingAdapter("edgeNode")).thenReturn(messagingAdapterMock);
    PowerMockito.mockStatic(MessagingConfigReaderFactory.class);
    PowerMockito.when(MessagingConfigReaderFactory.getConfigurationReader()).thenReturn(readerMock);
    ScheduledExecutorService scheduledExecutorServiceMock = Mockito.mock(ScheduledExecutorService.class);
    PowerMockito.mockStatic(Executors.class);
    PowerMockito.when(Executors.newSingleThreadScheduledExecutor()).thenReturn(scheduledExecutorServiceMock);
    when(readerMock.getConfigParams()).thenReturn("somePath,somePath,somePath");
    when(decompressUtilMock.decompressZip(Matchers.anyString(), Matchers.anyString())).thenReturn(true);
    when(checkSumUtilMock.check(anyString(), anyString())).thenReturn(true);
    when(commandExecutorMock.executeCommand("somePath verify /pa somePathKubeUpdates/KubePlatformSetup.exe")).thenReturn(false);
    updaterServiceExecutor.execute();
    Thread.sleep(10000);
    updaterServiceExecutor.close();
    verify(scheduledExecutorServiceMock, timeout(10000).times(1)).shutdownNow();
    verify(messagingAdapterMock, timeout(10000).times(1)).close();
}
@PrepareForTest({MessagingAdapterFactory.class, MessagingConfigReaderFactory.class, UpdaterServiceExecutor.class, Files.class})
@Test
public void should_not_throw_ServiceSDKException_when_occurred_while_closing_the_messagingAdapter() throws Exception {
    PowerMockito.mockStatic(Files.class);
    PowerMockito.when(Files.exists(any())).thenReturn(true);
    PowerMockito.mockStatic(MessagingAdapterFactory.class);
    PowerMockito.when(MessagingAdapterFactory.getMessagingAdapter("edgeNode")).thenReturn(messagingAdapterMock);
    PowerMockito.mockStatic(MessagingConfigReaderFactory.class);
    PowerMockito.when(MessagingConfigReaderFactory.getConfigurationReader()).thenReturn(readerMock);
    ScheduledExecutorService scheduledExecutorServiceMock = Mockito.mock(ScheduledExecutorService.class);
    PowerMockito.mockStatic(Executors.class);
    PowerMockito.when(Executors.newSingleThreadScheduledExecutor()).thenReturn(scheduledExecutorServiceMock);
    when(readerMock.getConfigParams()).thenReturn("somePath,somePath,somePath");
    when(decompressUtilMock.decompressZip(Matchers.anyString(), Matchers.anyString())).thenReturn(true);
    when(checkSumUtilMock.check(anyString(), anyString())).thenReturn(true);
    when(commandExecutorMock.executeCommand("somePath verify /pa somePathKubeUpdates/KubePlatformSetup.exe")).thenReturn(false);
    doThrow(new ServiceSDKException()).when(messagingAdapterMock).close();
    updaterServiceExecutor.execute();
    Thread.sleep(10000);
    updaterServiceExecutor.close();
    verify(scheduledExecutorServiceMock, timeout(10000).times(1)).shutdownNow();
    verify(messagingAdapterMock, timeout(10000).times(1)).close();
}
What is wrong here? Why is Jacoco showing the lines as not executed? Please advise.
Jacoco and PowerMockito don't work together.
Jacoco instruments the bytecode, collects the coverage data, and afterwards associates the coverage information with the source code based on an identifier of the class.
PowerMockito instruments the bytecode as well; this leads to different class identifiers, so the coverage calculated by Jacoco cannot be associated with the source code because the identifier information does not match.
This is a known issue.
Gerald's answer explains the reason. This only occurs when you have put the class being tested inside @PrepareForTest, so I removed it from certain test methods and now it's working fine. Using PowerMockito itself doesn't cause any issues; issues arise only if you have the class under test in @PrepareForTest. Find a way to manage with only the classes whose static methods you mock, not the class for which you are writing the test cases.

Exclude groovy slf4j logging from condition coverage in Sonar with Jacoco

We are using SonarQube 5.1 with the Jacoco Maven plugin 0.7.4, and all of our slf4j logging statements such as log.debug('Something happened') show that only 1 of 2 branches is covered. I understand that this is because slf4j internally does an "if debug enabled" check, and that's great, but we don't want this to throw off our numbers. We aren't interested in testing slf4j, and we'd rather not run every test multiple times for different logging levels.
So, how can we tell Sonar and/or Jacoco to exclude these lines from coverage? Both of them have configurable file exclusions, but from what I can tell those are only for excluding your own classes from coverage (using the target dir), not imported libraries. I tried adding 'groovy.util.logging.*' to the exclusion list anyway, but it didn't do anything.
The question "logger.isDebugEnabled() is killing my code coverage. I'm planning to exclude it while running cobertura" is similar and suggests that for Cobertura the 'ignore' property should be used instead of 'exclude'. I don't see anything like that for Jacoco or Sonar in the settings or documentation.
EDIT:
Example image from Eclipse attached, after running Jacoco coverage (Sonar shows the same thing in their GUI). This is actual code from one of our classes.
EDIT 2:
We are using the Slf4j annotation. Docs here:
http://docs.groovy-lang.org/next/html/gapi/groovy/util/logging/Slf4j.html
This local transform adds a logging ability to your program using LogBack logging. Every method call on a unbound variable named log will be mapped to a call to the logger. For this a log field will be inserted in the class. If the field already exists the usage of this transform will cause a compilation error. The method name will be used to determine what to call on the logger.
log.name(exp)
is mapped to
if (log.isNameLoggable()) {
    log.name(exp)
}
Here name is a place holder for info, debug, warning, error, etc. If the expression exp is a constant or only a variable access the method call will not be transformed. But this will still cause a call on the injected logger.
Hopefully this clarifies what's going on. Our log statements become two-branch ifs to avoid expensive string building for log levels that aren't enabled (a common practice, as far as I know). But that means that to guarantee coverage of all these branches, we have to run every test repeatedly for every logging level.
I did not find a general solution for excluding it, but if your codebase allows you to do so, you could wrap your logging statements in a method with an annotation containing "Generated" in its name.
A simple example:
package org.example.logging

import groovy.transform.Generated
import groovy.util.logging.Slf4j

@Slf4j
class Greeter {

    void greet(name) {
        logDebug("called greet for ${name}")
        println "Hello, ${name}!"
    }

    @Generated
    private logDebug(message) {
        log.debug message
    }
}
Unfortunately javax.annotation.Generated is not suitable, because it only has a retention of SOURCE; therefore I (ab)used groovy.transform.Generated here, but you can easily create your own annotation for that purpose.
I found that solution here: How would I add an annotation to exclude a method from a jacoco code coverage report?
UPDATE: In Groovy you can solve it most elegantly with a trait:
package org.example.logging

import groovy.transform.Generated
import groovy.util.logging.Slf4j

@Slf4j
trait LoggingTrait {

    @Generated
    void logDebug(String message) {
        log.debug message
    }
}
...and then...
package org.example.logging

import groovy.util.logging.Slf4j

@Slf4j
class Greeter implements LoggingTrait {

    void greet(name) {
        logDebug "called greet for ${name}"
        println "Hello, ${name}!"
    }
}
Unfortunately the property log is interpreted as a property of Greeter, not of LoggingTrait, so you must attach @Slf4j to both the trait and the class implementing it.
Nevertheless, doing so gives you the expected logger, the one of the implementing class:
14:25:09.932 [main] DEBUG org.example.logging.Greeter - called greet for world

Why doesn't the log4net XmlConfigurator attribute work for my unit tests

I'm using log4net and am trying to get logging in my unit tests. If I manually call
log4net.Config.XmlConfigurator.Configure();
it works, but there are a large number of test classes, so that is not good.
Since the manual call works, that seems to eliminate all of the "bad config, config location" issues.
I added
[assembly: log4net.Config.XmlConfigurator(Watch=true)]
to the AssemblyInfo of my test project, but when I run (either via native MSTest or the ReSharper test runner) I get no logging.
Help?
Source
[AssemblyInitialize()]
public static void MyTestInitialize(TestContext testContext)
{
    // Take care that the log4net.config file is added to the deployment files of the test config
    string fullPath = Path.Combine(System.Environment.CurrentDirectory, "log4net.config");
    FileInfo fileInfo = new FileInfo(fullPath);
    // the original snippet was cut off here; presumably the FileInfo is handed to the configurator:
    log4net.Config.XmlConfigurator.ConfigureAndWatch(fileInfo);
}
As it says in the documentation for assembly attributes
Therefore if you use configuration attributes you must invoke log4net
to allow it to read the attributes. A simple call to
LogManager.GetLogger will cause the attributes on the calling assembly
to be read and processed. Therefore it is imperative to make a logging
call as early as possible during the application start-up, and
certainly before any external assemblies have been loaded and invoked.
Because the unit test runners load the test assembly in order to find and run the tests, it isn't possible to initialise log4net using an assembly attribute in unit test projects, and you will have to use the XmlConfigurator.
Edit: as linked in a comment by the OP, this can be done in one place for the whole test project by using the AssemblyInitializeAttribute.

xunit programmatically add new tests/"[Facts]"?

We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit "[Fact]", as below:
[Fact]
public void TestAllCases()
{
    PileOfTests pot = new PileOfTests();
    pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if it returns HTTP 200 ("ok"). If any fail, we currently print it as a failure using
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit thinks all tests failed (understandably). Most importantly, we can't tell xUnit "here, run only this specific test". It's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is:
No, not directly. But there is a workaround, albeit a bit hacky, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying [RunWith(typeof(CustomRunner))] on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection and the actual methods,
the way the tests to run are passed to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing that each of the generated methods does is to invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See linked gist for further explanation.
Hope this helps; feel free to use the example implementation if you like.
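As a side note not covered by the answer above: on xUnit.net 2.x and later, the usual way to get one individually runnable test per input file is a data-driven [Theory] whose [MemberData] enumerates the folder. The folder name "TestCases" and the per-file RunSingle method below are hypothetical stand-ins for the existing PileOfTests.RunAll logic; treat this as a sketch, not the workaround from the linked gist.

using System.Collections.Generic;
using System.IO;
using System.Linq;
using Xunit;

public class JsonFileTests
{
    // one object[] per JSON file; xUnit turns each into its own test case
    public static IEnumerable<object[]> JsonFiles() =>
        Directory.EnumerateFiles("TestCases", "*.json")
                 .Select(path => new object[] { path });

    [Theory]
    [MemberData(nameof(JsonFiles))]
    public void FileReturnsOk(string path)
    {
        // RunSingle(path) is a hypothetical per-file variant of PileOfTests.RunAll()
        var pot = new PileOfTests();
        Assert.True(pot.RunSingle(path), $"{path} did not return HTTP 200");
    }
}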

Can I make Intellij Idea 11 IDE aware of assertEquals and other JUnit methods in Grails 2.0.x unit tests?

I find it very odd that with such excellent Grails integration, IDEA does not recognize standard JUnit assertion methods in Grails unit tests. I created a brand new project and made one domain class with a corresponding test to make sure it wasn't something weird with my larger project. Even if I add a @Test annotation, the IDE does not see any assertion methods:
@TestFor(SomeDomain)
class SomeDomainTests {

    @Test // thought adding this, not needed for Grails tests, would help but it doesn't
    void testSomething() {
        assertEquals("something", 1, 1); // test runs fine, but IDE thinks this method and any similar ones don't exist
    }
}
I have created an issue in the IntelliJ bug tracker: http://youtrack.jetbrains.com/issue/IDEA-82790. It will be fixed in IDEA 11.1.0.
As a workaround, you can add "import static org.junit.Assert.*" to your imports.
Note: using "assert 1 == 1 : 'message'" is preferable to "assertEquals('message', 1, 1)" in Groovy code.
IDEA has problems if you use 'def' to define a variable (so its type is not known) and then try to pass it to a strongly typed Java method, because it can't infer the type.
So it will give a message with words to the effect of "there is no method assertEquals() that takes arguments with type String, null, null".
I wouldn't expect this message in the example you give (because you are using ints directly, not a dynamically-typed variable) but I thought you might have missed it when trying to create a simple example code snippet for the question.
With the @TestFor annotation, an AST transformation adds methods to your test class, and IDEA does not pick up these methods.
You have two options:
Make the test class extend GrailsUnitTestCase.
Add the dynamic methods to your test class.