I am calling some C++ code that tries to load a Java class, e.g.
JNIEnv *jenv = ...
jclass cls = jenv->FindClass("org/some/bundle/SomeClass");
Now, the problem is that this class resides in an OSGi bundle, and the code above cannot find my class.
This problem only arises when running unit tests (Tycho Surefire headless tests). Is there a simple way to force the OSGi framework to find my class from JNI? On the Java side, I suspect something like DynamicImport-Package could have fixed my problem. I am unwilling to change the third-party C++ library just to get it working with the test framework, so I would prefer a solution on the Java test setup / configuration side, if possible.
The FindClass method of JNIEnv only searches the contents of the system ClassLoader as defined by the global application classpath. Since OSGi does not use the global classpath, it is no surprise that this doesn't work.
In general, whenever you load a class, you need to specify not just the class name but also the classloader that should load it. This is an inevitable consequence of modularity. So your code needs to find the bundle that you expect to contain the class and then call its loadClass method. You can do this directly in C++ code, but it may be easier to write a Java utility method that does it and then just call that method from C++.
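For example, here is a minimal sketch of such a utility, assuming the helper class name BundleClassLookup and a lookup by bundle symbolic name (both illustrative, not part of the original setup); the OSGi calls used (FrameworkUtil.getBundle, BundleContext.getBundles, Bundle.loadClass) are standard framework API:
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
// Hypothetical helper: resolves a class through the classloader of a specific
// bundle, so the native code does not have to rely on FindClass and the system classpath.
public final class BundleClassLookup {
    // Call this from C++ via GetStaticMethodID/CallStaticObjectMethod; the helper's own
    // jclass can be passed down from Java (e.g. via GetObjectClass on a helper instance).
    public static Class<?> loadFromBundle(String bundleSymbolicName, String className)
            throws ClassNotFoundException {
        BundleContext ctx = FrameworkUtil.getBundle(BundleClassLookup.class).getBundleContext();
        for (Bundle bundle : ctx.getBundles()) {
            if (bundleSymbolicName.equals(bundle.getSymbolicName())) {
                // Delegate to the bundle's own classloader.
                return bundle.loadClass(className);
            }
        }
        throw new ClassNotFoundException(className + " (no bundle named " + bundleSymbolicName + ")");
    }
}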
Well, I am not 100% sure that your case is like mine.
In my RCP I used to get the exception:
ClassNotFoundException: com.tool.packageA.IWantToLoadThisClass cannot be found by com.tool.packageB_1.0.0.qualifier
A simple solution was to:
Add com.tool.packageA to the Require-Bundle header in com.tool.packageB's MANIFEST.MF.
I wanted to avoid that solution, though, because classes from other bundles (com.tool.packageC, com.tool.packageD) could already be loaded normally (that setup wasn't done by me, so I didn't know how it worked).
Searching around, I found another solution, which I ended up using to keep things consistent with the bundles that already worked (com.tool.packageC, com.tool.packageD).
The solution was:
Use Eclipse-BuddyPolicy and Eclipse-RegisterBuddy (see here for detailed info).
This is how to get it to work:
Add Eclipse-BuddyPolicy: registered to com.tool.packageB MANIFEST.MF
Add Eclipse-RegisterBuddy: com.tool.packageB to com.tool.packageA MANIFEST.MF
Add Require-Bundle: com.tool.packageB to com.tool.packageA MANIFEST.MF
Now com.tool.packageA.IWantToLoadThisClass will be visible from com.tool.packageB, and you will be able to find it with jenv->FindClass("com/tool/packageA/IWantToLoadThisClass");.
I hope this helps.
Related
I'm writing a Gradle plugin that interacts with an external HTTP API. This interaction is handled by a single class (let's call it ApiClient). I'm writing some high-level tests that use Gradle TestKit to simulate an entire build that uses the plugin, but I obviously don't want them to actually hit the API. Instead, I'd like to mock ApiClient and check that its methods have been called with the appropriate arguments, but I'm not sure how to actually inject the mocked version into the plugin. The plugin is instantiated somewhere deep within Gradle, and gets applied to the project being executed using its void apply(Project project) method, so there doesn't appear to be a way to inject a MockApiClient object.
Perhaps one way is to manually instantiate a Project, apply() the plugin to it (at which point, I can inject the mocked object because I have control over plugin instantiation), and then programmatically execute a task on the project, but how can I do that? I've read the Gradle API documentation and haven't seen an obvious way.
A worst-case solution would be to pass in a debug flag through the plugin extension configuration, which the plugin would then use to decide whether it should use the real ApiClient or a mock (which would print some easily grep-able messages to STDOUT). This isn't ideal, though, since it's fuzzier than checking the arguments actually passed to the ApiClient methods.
Perhaps you could split your plugin into a few different plugins:
my-plugin-common - All the common stuff
my-plugin-real-services - Adds the "real" services to the model (e.g. RealApiClient)
my-plugin-mock-services - Adds "mock" services to the model (e.g. MockApiClient)
my-plugin - Applies my-plugin-real-services and my-plugin-common
my-plugin-mock - Applies my-plugin-mock-services and my-plugin-common
In the real world, people will only ever apply: 'my-plugin'
For testing you could apply: 'my-plugin-mock'
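A rough sketch of what the two aggregate plugins could look like, assuming the plugin IDs above are registered and that the *-services plugins contribute the ApiClient implementation through some shared extension (all names here are illustrative):
import org.gradle.api.Plugin;
import org.gradle.api.Project;
// "my-plugin": what users apply in the real world.
public class MyPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        // Real ApiClient plus the shared task/extension wiring.
        project.getPluginManager().apply("my-plugin-real-services");
        project.getPluginManager().apply("my-plugin-common");
    }
}
// "my-plugin-mock": what the TestKit builds apply instead.
class MyPluginMock implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        // Same common logic, backed by MockApiClient.
        project.getPluginManager().apply("my-plugin-mock-services");
        project.getPluginManager().apply("my-plugin-common");
    }
}
The TestKit build under test would then contain apply plugin: 'my-plugin-mock', and the assertions would run against whatever the mock services record.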
I am trying to migrate a legacy app that uses Camel/CXF (it offers some web services that include transformations) to WebSphere Liberty 16.0.0.03 (IBM JRE 1.8). Tests are failing because the app uses extension functions. I tried to disable secure processing as described here.
This change has no effect. That's why I tried to switch to the Saxon implementation globally by setting the system property "javax.xml.transform.TransformerFactory=net.sf.saxon.TransformerFactoryImpl" in the jvm.options config file. Again, this does not work.
While debugging I can see that com.ibm.ws.webcontainer.osgi.mbeans.PluginGenerator$2 overrides the property with com.ibm.xtq.xslt.jaxp.compiler.TransformerFactoryImpl during server start. I can see a method PluginGenerator.revertTransformerFactoryIfNeccessary in the stack that seems to trigger the change. Afterwards, all FactoryFinder.find() calls return the non-Saxon implementation.
Can anyone suggest how to either disable secure-processing successfully
or
a way to successfully set a custom TransformerFactory?
BTW: it seems to me like these two are bugs - do I report them as a regular PMR?
EDIT: possible workaround
As a result of the helpful suggestions, I added a @WebListener that sets the system property within its constructor (setting it in contextInitialized is too late, as stylesheets seem to be compiled during application start and processing therefore fails the tests). I bundle this as a "patch jar" with the legacy app.
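A sketch of what such a patch listener could look like (the class name is illustrative; the important part is that the property is set in the constructor):
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
// Bundled as a small "patch jar" with the legacy app. The constructor runs when the
// container instantiates the listener, which is early enough to win against the
// stylesheet compilation that happens during application start.
@WebListener
public class SaxonTransformerFactoryListener implements ServletContextListener {
    public SaxonTransformerFactoryListener() {
        System.setProperty("javax.xml.transform.TransformerFactory",
                "net.sf.saxon.TransformerFactoryImpl");
    }
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Intentionally empty: setting the property here would be too late.
    }
    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Nothing to clean up.
    }
}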
The Liberty web container plugin generator will only override the XML transformer factory if the IBM JDK is being used.
While the web container performs plugin generation on the IBM JDK, it swaps to an alternate transformer factory and then resets to the IBM JDK default, which is com.ibm.xtq.xslt.jaxp.compiler.TransformerFactoryImpl.
I think it is worth opening a PMR here. The PluginGenerator should not assume that it started with the default XML transformer factory; instead it should save off the value of javax.xml.transform.TransformerFactory and restore it after plugin generation has completed.
Temp workaround:
Since the PluginGenerator only swaps the XML transformer factory if you're running on the IBM JDK, you could change to an alternate JDK until your PMR gets resolved.
I agree that this is a bug. The official route for reporting problems is a PMR, but there is enough here for us to understand the problem and fix it through our beta program. If you want to get an iFix on a released version of the product (rather than waiting for it to come out via the beta program) then you will need to raise a PMR.
I'm using ShrinkWrap to start a Jetty server in my integration tests.
Problem:
When I start my test Jetty server and then create a mock of my controller, the mock doesn't work!
I suspect that the reason is different classloaders: JMockit - AppClassLoader, Jetty - WebAppClassLoader.
Question:
How can I make the mocking work?
P.S.
I've googled that the -javaagent:jmockit.jar option may help, but it doesn't. Is it necessary for a Maven project based on JDK 1.7?
ADDITION:
I've written a demo to illustrate my problem; you can find it via the link.
About my demo:
Except for about ten lines of code, it is identical to that project.
I've only added JMockit and a single mock to illustrate the problem.
See the JettyDeploymentIntegrationUnitTestCase.requestWebapp method: in that method we create a mock which doesn't work.
You can check that Jetty and JMockit load classes via sibling classloaders, so JMockit simply doesn't see Jetty's classes:
URLClassLoader
|
|-Launcher$AppClassLoader
|-WebAppClassLoader
The JUnit test in the example project is attempting to mock the ForwardingServlet class. But, in this scenario with an embedded Jetty web server, there are actually two instances of this class, both loaded in the same JVM but through different classloaders.
The first instance of the class is loaded by the regular classloader, through which classes are loaded from the thread that starts the JUnit test runner (AppClassLoader). So, when ForwardingServlet appears in test code, it is the one defined in this classloader. This is the class given to JMockit to mock, which is exactly what happens.
But then, a copy of ForwardingServlet is loaded inside the deployed web app (from the ".class" file in the file system, so not affected by the mocking as applied by JMockit, which is in-memory only), using Jetty's WebAppClassLoader. This class is never seen by JMockit.
There are two possible solutions to this issue:
Somehow get the class object loaded by WebAppClassLoader and then mock it by calling the MockUp(Class) constructor.
Configure the Jetty server so that it does not use a custom classloader for the classes in the web app.
The second solution is the easiest, and can be done simply by adding the following call on the ContextHandler object created from the WebArchive object, before setting the handler into the Jetty Server object:
handler.setClassLoader(ClassLoader.getSystemClassLoader());
I tested this and it worked as expected, with the @Mock doGet(...) method getting executed instead of the real one in ForwardingServlet.
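For reference, a minimal sketch of where that call sits in an embedded Jetty setup (in the demo project the context is created from the ShrinkWrap WebArchive; here it is simply pointed at a war path, and the class and method names are illustrative):
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;
public class EmbeddedJettyForTests {
    public static Server startServer(String warPath) throws Exception {
        Server server = new Server(8080);
        // In the demo this context is created from the ShrinkWrap WebArchive;
        // here it is simply pointed at an exported war for brevity.
        WebAppContext handler = new WebAppContext();
        handler.setContextPath("/");
        handler.setWar(warPath);
        // The key line: reuse the system classloader instead of Jetty's
        // WebAppClassLoader, so the servlet classes seen by the webapp are the
        // same ones JMockit has already redefined in memory.
        handler.setClassLoader(ClassLoader.getSystemClassLoader());
        server.setHandler(handler);
        server.start();
        return server;
    }
}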
I have a newbie but really important question for me: I have a Mac OS X application that uses the Carbon API, but it is still a C++ application. I need to find out which functions are called at execution time and then make a C++ patch to replace one of those functions.
The real goal: I need to log all text printed into a chat window that the application has inside an inaccessible Carbon view. I thought at first it was a Cocoa application, but it's not, so fscript and imlib are no good for injecting code.
Is it possible? Any clues? Thank you very much.
Cheers :)
You could look into using truss (or dtruss on Mac OS X) to figure out what system calls are being made, but I'm not sure about user-level calls. The LD_PRELOAD environment variable (DYLD_INSERT_LIBRARIES on Mac OS X) can allow you to inject methods into other apps, but C++ methods tend to have various dependencies regarding name mangling and calling conventions, so it would probably be tricky to plug in your own.
Can you just have the app maintainer add actual hooks to allow for what you need?
I need to host and run managed controls inside a purely unmanaged C++ app. How can I do this?
Running unlicensed controls is typically simple:
if (SUCCEEDED(ClrCreateManagedInstance(type, iid, &obj)))
{
// do something with obj
}
When using a licensed control, however, we need to somehow embed a .licx file into the project (ref application licensing). In an unmanaged C++ app, the requisite glue does not seem to work. The lc.exe tool is supposed to be able to embed the license as an assembly resource, but either we were not using the correct invocation, or it failed silently. Any help would be appreciated.
The answer depends on the particular component you're using. Contact the component vendor's help desk or read their documentation on what it takes to deploy the component.
Basically component developers are free to implement licensing as they deem fit. With the .licx file the component needs to be able to do whatever the developer wished via GetKey and IsValidKey (explained in the link you posted).
So if GetKey checks for a .licx file in the component directory, you just need to make sure it's there.
AFAIK the client assembly doesn't need to do anything except instantiate the control.
Also, if you post the name of the component and the lc.exe command you're using, people could take a look.