Can the JaCoCo CLI report be filtered via classfiles?

Is it possible to generate a report for only one or a few selected classes out of a JAR file? For example, *.exec is recorded for the whole application, but the report is needed for only one class. With --classfiles we can specify a JAR file such as /path/trial.jar, but nothing deeper like /path/trial.jar/pkg1/class. Is there some option to do that?

The <path> in --classfiles <path> is scanned recursively, and all class files found are used for generation of the report. It can therefore point not only at a JAR file, but also at a directory containing JAR files and class files, or at a single class file.
For example, after compiling Example.java
class Example {
class Inner {
}
public static void main(String[] args) {
}
}
into the directory classes
javac Example.java -d classes
two class files will be produced, as can be seen by running ls classes
'Example$Inner.class' Example.class
After generation of jacoco.exec
java -cp classes -javaagent:jacoco-0.8.0/lib/jacocoagent.jar Example
it is possible to generate a report for the whole classes directory using the command
java -jar jacoco-0.8.0/lib/jacococli.jar report jacoco.exec --classfiles classes
and two class files will be analyzed
[INFO] Loading execution data file /tmp/example/jacoco.exec.
[INFO] Analyzing 2 classes.
or to generate a report for just Example.class using the command
java -jar jacoco-0.8.0/lib/jacococli.jar report jacoco.exec --classfiles classes/Example.class
and only this class will be analyzed
[INFO] Loading execution data file /tmp/example/jacoco.exec.
[INFO] Analyzing 1 classes.
So you can unpack the JAR file into a directory and analyze whichever parts of it you want, separately.
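Applied to the JAR from the question, the unpack-then-filter approach might look like this (pkg1/SomeClass.class is a placeholder, since the real class name isn't shown in the question):

```shell
# Unpack the JAR so that individual class files become addressable
unzip /path/trial.jar -d trial-classes

# Generate a report restricted to a single class file
java -jar jacoco-0.8.0/lib/jacococli.jar report jacoco.exec \
    --classfiles trial-classes/pkg1/SomeClass.class \
    --html report
```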

Related

While running a flex template NoClassDefFoundError: org/apache/beam/sdk/transforms/DoFn is thrown

I am running GCP flex templates in Dataflow.
While starting a job from a template, I am getting this exception:
Error: Unable to initialize main class.
com.mycompany.pubsubdfjobs.protocol1_0.PubSubRedis
Caused by: java.lang.NoClassDefFoundError: org/apache/beam/sdk/transforms/DoFn
DoFn is used in a lot of classes in the project. I am able to run the code locally. All dependencies are in my build.gradle file. The build.gradle file is copied into the Dockerfile that is used for the template.
What could be the issue?
The error states that your JAR is not able to find the main class for your project. This class should be specified in your pom.xml file according to the documentation. This file needs to have the same structure as the sample presented in the documentation. You can make sure that the plugin tag in your pom.xml file matches the one found in the documentation.
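For a Maven build, the plugin section referred to typically uses the maven-shade-plugin to write the main class into the fat JAR's manifest. A minimal sketch (the version number is an assumption based on common shade-plugin usage; the main class is taken from the error message above):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.4.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Writes Main-Class into the JAR manifest -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.mycompany.pubsubdfjobs.protocol1_0.PubSubRedis</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```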

Can I call a runnable Java JAR from an Informatica Java transformation directly, with input and output parameters?

I want to call this runnable JAR with input and output parameters, using a function or a Java expression in a Java transformation:
java -classpath a.jar:b.jar \
-Xms128m \
-Xmx1024m {main class} \
-i ${FILE_IN} \
-o ${FILE_OUT}
Is this possible?
I think you can, but not the way you are thinking. You can put the third-party JAR file in the Informatica lib folder and then import the program in the Java transformation. If that is a possibility, you can follow the steps below.
Place the JAR file in /infahome/Informatica8/server/bin/javalib/. Place the same JAR file on the Informatica client too, and list it in the JTX properties.
Set the classpath in the Integration Service processes.
Restart Informatica.
Write the code in the Java transformation and compile it.
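Inside the Java transformation's code entry, one way to invoke the runnable JAR is through ProcessBuilder. The sketch below is standalone Java, not Informatica-specific API; the main class name is a placeholder, and the JAR paths and JVM flags mirror the shell command from the question:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class JarLauncher {

    // Builds the same command line as the shell invocation in the question.
    // "com.example.MainClass" is a placeholder for the real main class.
    static List<String> buildCommand(String fileIn, String fileOut) {
        List<String> cmd = new ArrayList<>();
        cmd.add("java");
        cmd.add("-classpath");
        cmd.add("a.jar:b.jar");
        cmd.add("-Xms128m");
        cmd.add("-Xmx1024m");
        cmd.add("com.example.MainClass");
        cmd.add("-i");
        cmd.add(fileIn);
        cmd.add("-o");
        cmd.add(fileOut);
        return cmd;
    }

    // Launches the external JVM and waits for it to finish;
    // a non-zero exit code signals failure.
    static int run(String fileIn, String fileOut)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(buildCommand(fileIn, fileOut))
                .inheritIO()
                .start();
        return p.waitFor();
    }

    public static void main(String[] args) {
        System.out.println(buildCommand("/tmp/in.csv", "/tmp/out.csv"));
    }
}
```

The transformation would call run() per row or per group, passing ${FILE_IN} and ${FILE_OUT} through input ports.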

Build AWS Java Lambda with Gradle, use shadowJar or buildZip for archive to upload?

Description
I am developing AWS Java Lambdas, with Gradle as my build tool.
AWS requires a "self-contained" Java archive (.jar, .zip, ...) to be uploaded, which has to include everything: my source code, the dependencies, etc.
There is the Gradle plugin shadow for this purpose, it can be included like this:
import com.github.jengelman.gradle.plugins.shadow.transformers.Log4j2PluginsCacheFileTransformer
...
shadowJar {
archiveName = "${project.name}.jar"
mergeServiceFiles()
transform(Log4j2PluginsCacheFileTransformer)
}
build.dependsOn shadowJar
gradle build produces a file somefunction.jar, in my case it is 9.5MB in size.
The AWS documentation suggests:
putting your dependency .jar files in a separate /lib directory
There are specific instructions on how to do this in Creating a ZIP Deployment Package for a Java Function.
task buildZip(type: Zip) {
archiveName = "${project.name}.zip"
from compileJava
from processResources
into('lib') {
from configurations.runtimeClasspath
}
}
build.dependsOn buildZip
gradle build produces a file build/distributions/somefunction.zip, in my case it is 8.5MB in size.
Both archives, Zip and JAR, can be uploaded to AWS and run fine. Performance seems to be the same.
Question
Which archive should I favor, Zip or (shadow)Jar?
More specific questions that come to my mind:
The AWS documentation says: "This [putting your dependency .jar files in a separate /lib directory] is faster than putting all your function’s code in a single jar with a large number of .class files." Does anyone know what exactly is faster? Build time? Cold/warm start? Execution time?
When building the Zip, I am not using the shadowJar features mergeServiceFiles() and Log4j2PluginsCacheFileTransformer. Not using mergeServiceFiles should, in the worst case, decrease the execution time. As long as I don't use Log4j2 plugins, I can omit the Log4j2PluginsCacheFileTransformer. Right?
Are there any performance considerations using the one or the other?

gtest unit tests target configuration file path

In my C++ application, I have a text file (dataFile.txt) that is installed on the Linux target machine in the following path:
/SoftwareHomeDir/Configuration/Application/dataFile.txt
This file exists on my Rational ClearCase source code environment under the path:
/ProjectName/config/Application/dataFile.txt
I am developing a unit test in gtest that does the following:
Read specific data from dataFile.txt; if the data does not exist, write it into the file.
1) I am avoiding creating an environment variable to check whether I am in the compilation environment or on the target machine, and then adding additional test code to the final release. I really want to keep test code separate from final code.
2) I am not using any IDE (no Visual Studio, no Qt, etc.), just Notepad++.
3) The compilation server is shared (accessed with a username), and the root folder "/" is shared. This means that if I create the path "/SoftwareHomeDir/Configuration/Application/dataFile.txt", it will be visible to all users, and if another user runs his gtest unit test, he may overwrite my file.
4) In the final code, the path to the dataFile is hard-coded, and it would be very costly (a few seconds per run) to implement a filesearch(filename) method that looks for the file on the entire hard drive before reading it.
Question:
I am looking for a solution to unit-test my code in the compilation environment using /ProjectName/config/Application/dataFile.txt.
The solution to my problem was to combine gmock with gtest, as described at:
https://github.com/google/googletest/blob/master/googlemock/docs/CookBook.md#delegating-calls-to-a-fake
The only modification I made to my code is that instead of defining the path to the configuration data with a #define, I created a function getConfigFilePath() that returns the hard-coded path of the configuration file in the installed application. From there, I mocked the class, and in my mock a fake getConfigFilePath() returns the hard-coded path of the config file in the project tree in ClearCase while the tests are executing. This is precisely what I was looking for.

How to create a jar file?

How can I create an executable JAR for a Geb-Groovy based project in Eclipse?
The following is the directory structure:
the pages package contains the Groovy page files
the testclasses package contains the test-case Groovy files
the utils package contains the Groovy files that read data from Excel sheets
Detailed instructions for creating the jar file would be highly appreciated.
If the project you are working with is a Gradle project, I would recommend looking at a task called "shadowJar": https://github.com/johnrengelman/shadow
Your build.gradle would have something like this:
apply plugin: "com.github.johnrengelman.shadow"
mainClassName = '<Name of your Main Class>' // This acts as the jar's entry point; the 'main' method of this class is executed when the jar is run
shadowJar {
manifest {
attributes 'Main-Class': mainClassName
}
}
Then you simply run the shadowJar task, and a JAR file is generated in your build folder. It should contain all your dependencies as well.
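Assuming the Gradle wrapper is present, building and running the result might look like this (the project name and version in the file name are placeholders; -all is the shadow plugin's default archive classifier):

```shell
./gradlew shadowJar
# shadow's default output name is <project>-<version>-all.jar
java -jar build/libs/myproject-1.0-all.jar
```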