I have three files:
ScriptProcessor.java:
public interface ScriptProcessor {
    public String someMethod();
}
ScriptProcessorDummy.java:
public class ScriptProcessorDummy implements ScriptProcessor {
    public String someMethod() {
        return "some string";
    }
}
In the main method, the code does the following:
URLClassLoader loader = null;
loader = new URLClassLoader(new URL[] {new URL("JARFILE")});
if (loader != null) {
    ScriptProcessor processor = (ScriptProcessor) loader.loadClass("ScriptProcessorDummy").newInstance();
}
"JARFILE" contains class files of ScriptProcessor and ScriptProcessorDummy.
The code works fine when using JDK 1.4 but when using JDK 1.5, the typecast (to ScriptProcessor) fails with java.lang.ClassCastException.
Could somebody please tell me how to fix this?
Thanks,
Raj
You need to make your current class loader the parent of the new class loader you are creating, and make sure that there isn't a copy of the interface in your jar.
The symptom suggests that you have two copies of the interface: one in the main class loader and one in your new one. The object returned uses the copy from the separate jar, while your calling code uses the main one. They are not the same class. You have to make sure the JVM resolves the interface to the same loaded class when it links the plugin class as it does for the code you compiled against it.
The first thing to do is run 'jar tf' on the jar and see if the two-copies hypothesis is correct. If so, remove the interface from the jar and try running again. If you now get a NoClassDefFoundError, fix the construction of the loader:
new URLClassLoader(urlArray, Thread.currentThread().getContextClassLoader());
assuming that your environment maintains the context class loader. Alternatively,
new URLClassLoader(urlArray, ScriptProcessor.class.getClassLoader());
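For reference, a minimal sketch of the second variant (assuming the jar no longer contains ScriptProcessor.class, and with "plugin.jar" as a placeholder path):
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderDemo {
    public static void main(String[] args) throws Exception {
        // Parent the new loader on the loader that already defines ScriptProcessor,
        // so both sides see the exact same interface class.
        URL[] urls = { new File("plugin.jar").toURI().toURL() };
        ClassLoader parent = ScriptProcessor.class.getClassLoader();
        URLClassLoader loader = new URLClassLoader(urls, parent);

        ScriptProcessor processor =
                (ScriptProcessor) loader.loadClass("ScriptProcessorDummy").newInstance();
        System.out.println(processor.someMethod());
    }
}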
Try using:
loader.loadClass("ScriptProcessorDummy", true).newInstance();
the boolean tells the class loader to resolve the class.
First, let me explain the background.
I wanted to extend a C++/MFC application with REST APIs and decided to use ASP.NET Core 5 in a library, bridging it to unmanaged code through a C++/CLI library. (A separate ASP.NET application would be duplicated effort and would require rewriting all the logic in C#; the RESTful server should therefore run in the same process.)
The ASP.NET host is started this way: C++/MFC -> C++/CLI -> ASP library (the ASP library is referenced by the CLI library, and the CLI library is referenced by the native application).
The first problem: when building the host, the Microsoft.Extensions.Hosting.Abstractions assembly could not be resolved. I found that "My CLI Library".runtimeconfig.json has the wrong framework reference, which causes the assembly not to be resolved; the generated runtimeconfig.json is simply wrong. After every build it has to be corrected manually to reference AspNetCore instead of NETCore (done in a PostBuildEvent).
The next problem: while the ASP.NET host is being built, ASP.NET could not resolve the "My ASP Library.dll" assembly (via reflection). I worked around this with an AssemblyResolve event handler that supplies the right path, though I'm not sure it is the correct solution. AppDomain.BaseDirectory is an empty string, which may be why the library could not be found.
In the AssemblyResolve event:
Assembly::LoadFrom(assemblyFile)
Finally I could start the server, and it works. Then I needed to use dependency injection, and my service could not be resolved from the controller.
Then I used my ASP library in another C# project to test it, and there it works...
I'm sure it only fails when the entry point of the process is in unmanaged code.
public interface IServerImpl
{
    void OnFail(String msg);
}

public class CServerImpl : IServerImpl
{
    public void OnFail(String msg)
    {
    }
}
... in Startup
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IServerImpl, CServerImpl>();
    services.AddControllers();
}
... Controller
[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private readonly ILogger<WeatherForecastController> _logger;
    private readonly IServerImpl _serverImpl;

    public WeatherForecastController(ILogger<WeatherForecastController> logger, IServerImpl serverImpl)
    {
        _logger = logger;
        _serverImpl = serverImpl;
    }
    ...
}
Is there any workaround to get this working? Are these problems bugs in ASP.NET Core, or am I doing something wrong?
Thanks in advance
The problem has been solved. Resolving the assembly in the AssemblyResolve event with Assembly.LoadFrom() caused my ASP assembly to be loaded (or mapped) twice. Loading it into a different assembly load context means duplicated types, static variables and so on. As a result, the IServerImpl service I registered was looked up in the controller against another IServerImpl type: the type names are identical, but the types are not the same.
For more information see technical challenges: https://learn.microsoft.com/en-us/dotnet/api/system.runtime.loader.assemblyloadcontext?view=net-5.0
Another issue on GitHub: https://github.com/dotnet/runtime/issues/39783
In the AssemblyResolve event, I now return the already loaded assembly like this, instead of loading it again:
AppDomain.CurrentDomain.GetAssemblies()
.SingleOrDefault(asm => asm.FullName.StartsWith(args.Name.Split(',')[0] + ","))
I still welcome answers that clarify why my ASP assembly is not found by the framework.
Aside from recompiling rt.jar, is there any way I can replace the currentTimeMillis() call with one of my own?
1# The right way to do it is to use a Clock object and abstract time.
I know, but we'll be running code developed by countless developers who have not used Clock or have rolled an implementation of their own.
2# Use a mock tool like JMockit to mock that class.
Even though that only works with HotSpot disabled (-Xint), and we had success with the code below, the mock does not "persist" into external libraries. That means you'd have to mock it everywhere, which is not feasible because the code is out of our control. All code under main() does return 0 millis (as in the example), but a new DateTime() still returns the actual system millis.
@MockClass(realClass = System.class)
public class SystemMock extends MockUp<System> {
    // returns 1970-01-01
    @Mock public static long currentTimeMillis() { return 0; }
}
3# Re-declare System at startup by using -Xbootclasspath/p (edited)
While possible, and though you can create/alter methods, the one in question is declared as public static native long currentTimeMillis();. You cannot change its declaration without digging into Sun's proprietary native code, which would make this an exercise in reverse engineering and hardly a stable approach.
All recent Sun JVMs crash with the following error:
EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x00000, pid=4668, tid=5736
4# Use a custom ClassLoader (new test, as suggested in the comments)
While it is trivial to replace the system class loader using -Djava.system.class.loader, the JVM loads the custom class loader through the default class loader, and System is not even pushed through the custom class loader.
public class SimpleClassLoader extends ClassLoader {
    public SimpleClassLoader(ClassLoader classLoader) {
        super(classLoader);
    }

    @Override
    public Class<?> loadClass(String name) throws ClassNotFoundException {
        return super.loadClass(name);
    }
}
Using java -verbose:class, we can see that java.lang.System is loaded from rt.jar:
Line 15: [Loaded java.lang.System from C:\jdk1.7.0_25\jre\lib\rt.jar]
I'm running out of options.
Is there some approach I'm missing?
You could use an AspectJ compiler/weaver to compile/weave the problematic user code, replacing the calls to java.lang.System.currentTimeMillis() with your own code. The following aspect will do just that:
public aspect CurrentTimeInMillisMethodCallChanger {

    long around():
            call(public static native long java.lang.System.currentTimeMillis())
            && within(user.code.base.pckg.*) {
        return 0; // provide your own implementation returning a long
    }
}
I'm not 100% sure whether I'm overlooking something here, but you can create your own System class like this:
public class System {
    // Delegate everything except currentTimeMillis() to the real java.lang.System.
    public static final PrintStream err = java.lang.System.err;
    public static final InputStream in = java.lang.System.in;
    public static final PrintStream out = java.lang.System.out;

    public static void arraycopy(Object src, int srcPos, Object dest, int destPos, int length) {
        java.lang.System.arraycopy(src, srcPos, dest, destPos, length);
    }

    // ... and so on with all methods (currently 26) except currentTimeMillis()

    public static long currentTimeMillis() {
        return 4711L; // your application-specific clock value
    }
}
Then import your own System class in every Java file; "Organize Imports" in Eclipse should do the trick.
Then all Java files will use your application-specific System class.
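For illustration, a hypothetical caller might look like this (assuming the replacement class lives in a package such as myapp.time; the single-type import shadows java.lang.System within that file):
import myapp.time.System; // hypothetical package for the replacement class

public class Demo {
    public static void main(String[] args) {
        long now = System.currentTimeMillis(); // resolves to the replacement, so returns 4711L
        System.out.println(now);               // out is delegated to the real java.lang.System
    }
}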
As I said, it is not a nice solution, because you will need to maintain your System class whenever Java changes the original one. You must also make sure that your class is always the one being used.
As discussed in the comments, it is possible that option #3 in the original question has actually worked, successfully replacing the default System class.
If that is true, then application code which calls currentTimeMillis() will be calling the replacement, as expected.
Perhaps unexpectedly, core classes like java.util.Timer would also get the replacement!
If all of the above are true, then the root cause of the crash could be the successful replacement of the System class.
To test, you could instead replace System with a copy that is functionally identical to the original to see if the crashes disappear.
Unfortunately, if this answer turns out to be correct, it would seem that we have a new question. :) It might go like this:
"How do you provide an altered System.currentTimeMillis() to application classes, but leave the default implementation in place for core classes?"
I've tried using Javassist to remove the native currentTimeMillis, add a pure-Java one and load it using -Xbootclasspath/p, but I got the same access-violation crash as you did. I believe that's probably because of the native method registerNatives that's called in the static block, but it's really too much work to disassemble the native library.
So, instead of changing System.currentTimeMillis, how about changing the user code? If the user code is already compiled (you don't have the source code), we can use tools like FindBugs to identify uses of currentTimeMillis and reject the code (maybe we can even replace the call to currentTimeMillis with your own implementation), as sketched below.
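For what it's worth, a rough sketch of that call-replacement idea using Javassist (user.code.SomeClass and the output directory are placeholders):
import javassist.ClassPool;
import javassist.CtClass;
import javassist.expr.ExprEditor;
import javassist.expr.MethodCall;

public class TimeCallRewriter {
    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        CtClass cc = pool.get("user.code.SomeClass"); // placeholder user class

        // Replace every call to System.currentTimeMillis() in this class
        // with a constant result (or a call to your own clock).
        cc.instrument(new ExprEditor() {
            @Override
            public void edit(MethodCall m) throws javassist.CannotCompileException {
                if ("java.lang.System".equals(m.getClassName())
                        && "currentTimeMillis".equals(m.getMethodName())) {
                    m.replace("{ $_ = 0L; }");
                }
            }
        });

        cc.writeFile("patched-classes"); // placeholder output directory
    }
}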
Is it possible to automatically generate Sitecore templates just by coding models? I'm using Sitecore 8.0 and I saw the Glass Mapper code-first approach, but I can't find more information about it.
Not sure why there isn't much info about it, but you can definitely model/code first! I do it a lot using the attribute configuration approach, like so:
[SitecoreType(true, "{generated guid}")]
public class ExampleModel
{
    [SitecoreField("{generated guid}", SitecoreFieldType.SingleLineText)]
    public virtual string Title { get; set; }
}
Now, how this works: the 'true' value for the first SitecoreType parameter indicates that the type may be used for code-first. There is a GlassCodeFirstDataprovider which has an Initialize method, executed in Sitecore's Initialize pipeline. This method collects all configurations marked for code-first and creates them in the SQL data provider. The sections and fields are stored in memory. It also takes inheritance into account (base templates).
I think you first need to uncomment some code in the GlassMapperScCustom class you get when you install the project via NuGet. The PostLoad method contains the few lines that execute the Initialize method of each CodeFirstDataprovider:
var dbs = global::Sitecore.Configuration.Factory.GetDatabases();
foreach (var db in dbs)
{
    var provider = db.GetDataProviders().FirstOrDefault(x => x is GlassDataProvider) as GlassDataProvider;
    if (provider != null)
    {
        using (new SecurityDisabler())
        {
            provider.Initialise(db);
        }
    }
}
Furthermore, I would advise using code-first in development only. You can create packages or serialize the templates as usual and deploy them to other environments, so you don't need the data provider (and its potential risks) there.
You can. But it's not going to be Glass related.
Code first is exactly what Sitecore.PathFinder is looking to achieve. There's not a lot of info publicly available on this yet however.
Get started here: https://github.com/JakobChristensen/Sitecore.Pathfinder
I am working on a system that uses Drools to evaluate certain objects. However, these objects can be of classes that are loaded at runtime using Jodd. I am able to load a class file fine using the following function:
public static void loadClassFile(File file) {
    try {
        // use Jodd ClassLoaderUtil to load class into the current ClassLoader
        ClassLoaderUtil.defineClass(getBytesFromFile(file));
    } catch (IOException e) {
        exceptionLog(LOG_ERROR, getInstance(), e);
    }
}
Now let's say I have created a class called Tire and loaded it using the function above. Is there a way I can use the Tire class in my rule file?
rule "Tire Operational"
when
$t: Tire(pressure == 30)
then
end
Right now, if I try to add this rule, I get an error saying "unable to resolve ObjectType Tire". My assumption is that I would somehow need to import Tire in the rule, but I'm not really sure how to do that.
I haven't used Drools since version 3, but I will try to help anyway. When you load a class this way (dynamically, at runtime, no matter whether you use e.g. Class.forName() or Jodd), the loaded class name is simply not available to be used explicitly in the code. I believe we can simplify your problem with the following pseudo-code, where you first load a class and then try to use its name:
defineClass('Tire.class');
Tire tire = new Tire();
This obviously doesn't work, since the Tire type is not available at compile time: the compiler does not know what type you are going to load during execution.
What would work is to have Tire implement some interface (e.g. VehiclePart). Then you could use the following pseudo-code:
Class tireClass = defineClass('Tire.class');
VehiclePart tire = tireClass.newInstance();
System.out.println(tire.getPartName()); // prints 'tire' for example
Then maybe you can build your Drools rules over the VehiclePart interface and its getPartName() property.
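A compilable version of that idea might look like this (assuming a VehiclePart interface on the host's classpath that the dynamically built Tire class implements; the "plugins" directory and the Tire class name are placeholders):
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class VehiclePartDemo {
    // Shared contract: both the host and the dynamically loaded Tire compile against this.
    public interface VehiclePart {
        String getPartName();
    }

    public static void main(String[] args) throws Exception {
        // "plugins" contains Tire.class laid out according to its package.
        URL[] urls = { new File("plugins").toURI().toURL() };
        URLClassLoader loader = new URLClassLoader(urls, VehiclePartDemo.class.getClassLoader());

        Class<?> tireClass = loader.loadClass("Tire");
        VehiclePart tire = (VehiclePart) tireClass.newInstance();
        System.out.println(tire.getPartName()); // prints "tire", for example
    }
}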
Addendum
The above makes sense only when the interface covers all the properties of the dynamically loaded class. In most cases this is not a valid solution: dynamically loaded classes simply do not share properties. So, here is another approach.
Instead of using explicit class loading, this problem can be solved by 'extending' the class loader's classpath. Be warned, this is a hack!
In Jodd there is a method, ClassLoaderUtil.addFileToClassPath(), that can add a file or a path to the class loader at runtime. Here are the steps that worked for me:
1) Put all dynamically created classes into some root folder, respecting their packages. For example, let's say we want to use a jodd.samples.TestBean class that has two properties: number (int) and value (String). We then need to put this class into the root/jodd/samples folder.
2) After building all the dynamic classes, extend the class loader's path:
ClassLoaderUtil.addFileToClassPath("root", ClassLoader.getSystemClassLoader());
3) Load the class and create an instance of it before creating the KnowledgeBuilder:
Class testBeanClass = Class.forName("jodd.samples.TestBean");
Object testBean = testBeanClass.newInstance();
4) At this point you can use BeanUtils (from Jodd, for example) to manipulate the properties of the testBean instance.
5) Create the Drools objects and insert testBean into the session:
knowledgeSession.insert(testBean);
6) Use it in the rule file:
import jodd.samples.TestBean;
rule "xxx"
when
$t: TestBean(number == 173)
then
System.out.println("!!!");
end
This worked for me. Note that in step #2 you can try using a different class loader, but you might need to pass it to the KnowledgeBuilderFactory via a KnowledgeBuilderConfiguration (i.e. PackageBuilderConfiguration).
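As a rough sketch of that last point against the Drools 5 knowledge-api (the exact factory signatures may differ between versions, so treat this as an assumption to verify; rules.drl is a placeholder):
import java.util.Properties;

import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderConfiguration;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;

public class CustomLoaderKnowledgeBuilder {
    public static KnowledgeBuilder newBuilder(ClassLoader dynamicLoader) {
        // Hand Drools the class loader that can see the dynamically added classes,
        // so the DRL import of jodd.samples.TestBean resolves when the rules are compiled.
        KnowledgeBuilderConfiguration conf =
                KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration(new Properties(), dynamicLoader);
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(conf);
        kbuilder.add(ResourceFactory.newClassPathResource("rules.drl"), ResourceType.DRL);
        return kbuilder;
    }
}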
Another solution
Another solution is to simply copy all the object's properties into a map and deal with the map in the rule files. So you could use something like this at step #4:
Map map = new HashMap();
BeanTool.copy(testBean, map);
and later (at step #5) insert the map into the Drools session instead of the bean instance. In this case it would be even better to use the defineClass() method to explicitly define each class.
I've successfully loaded a C++ plugin using a custom plugin loader class. Each plugin has an extern "C" create_instance function that returns a new instance using "new".
A plugin is an abstract class with a few non-virtual functions and several protected variables (std::vector<std::string> refList being one of them).
The plugin_loader class successfully loads and even calls a virtual method on the loaded class (namely std::string plugin::getName()).
The main function creates an instance of "host" which contains a vector of reference counted smart pointers, refptr, to the class "plugin". Then, main creates an instance of plugin_loader which actually does the dlopen/dlsym, and creates an instance of refptr passing create_instance() to it. Finally, it passes the created refptr back to host's addPlugin function. host::addPlugin successfully calls several functions on the passed plugin instance and finally adds it to a vector<refptr<plugin> >.
The main function then subscribes to several Apple events and calls RunApplicationEventLoop(). The event callback decodes the result and then calls a function in host, host::sendToPlugin, that identifies the plugin the event is intended for and then calls the handler in the plugin. It's at this point that things stop working.
host::sendToPlugin reads the result and determines the plugin to send the event off to.
I'm using an extremely basic plugin created as a debugging plugin that returns static values for every non-void function.
Any call to any virtual function of a plugin in the vector causes a bad-access exception. I've tried replacing the refptrs with regular pointers and also boost::shared_ptrs, and I keep getting the same exception. I know that the plugin instance is valid, as I can examine the instance in Xcode's debugger and even view the items in the plugin's refList.
I think it might be a threading problem, because the plugins were created in the main thread while the callback is operating in a separate thread. I think things are still running in the main thread, judging by the backtrace when the program hits the error, but I don't know Apple's implementation of RunApplicationEventLoop, so I can't be sure.
Any ideas as to why this is happening?
class plugin
{
public:
    virtual std::string getName();
protected:
    std::vector<std::string> refList;
};
and the pluginLoader class:
template<typename T> class pluginLoader
{
public:
    pluginLoader(std::string path);
    // initializes private mPath string with path to dylib

    bool open();
    // opens the dylib and looks up the createInstance function. Returns true if successful, false otherwise

    T * create_instance();
    // Returns a new instance of T, NULL if unsuccessful
};
class host
{
public:
    void addPlugin(int id, plugin * plug);
    void sendToPlugin(); // this is the problem method
    static host * me;
private:
    std::vector<plugin *> plugins; // or vector<shared_ptr<plugin> > or vector<refptr<plugin> >
};
Apple event code from host.cpp:
host * host::me;
pascal OSErr HandleSpeechDoneAppleEvent(const AppleEvent *theAEevt, AppleEvent *reply, SRefCon refcon) {
    // this is all boilerplate taken straight from an Apple sample except for the host::me->ae_callback line
    OSErr status = 0;
    Result result = 0;
    // get the result
    if (!status) {
        host::me->ae_callback(result);
    }
    return status;
}

void host::ae_callback(Result result) {
    OSErr err;
    // again, boilerplate Apple code
    // grab information from result
    if (!err)
        sendToPlugin();
}

void host::sendToPlugin() {
    // calling *any* method in plugin results in failure regardless of what I do
}
EDIT: This is being run on OS X 10.5.8, and I'm using GCC 4.0 with Xcode. This is not designed to be a cross-platform app.
EDIT: To be clear, the plugin works up until the Apple-supplied event loop calls my callback function. Things stop working when the callback function calls back into host. That is the problem I'm having; everything up to that point works.
Without seeing all of your code it isn't going to be easy to work out exactly what is going wrong. Some things to look at:
Make sure that the linker isn't throwing anything away. With gcc, try the link option -Wl,-E -- we use this on Linux, but don't seem to have found a need for it on the Macs.
Make sure that you're not accidentally unloading the dynamic library before you've finished with it. RAII doesn't work for unloading dynamic libraries unless you also stop exceptions at the dynamic library border.
You may want to examine our plug-in library, which works on Linux, Mac and Windows. The dynamic loading code (along with a load of other library stuff) is available at http://svn.felspar.com/public/fost-base/trunk/
We don't use the dlsym mechanism -- it's kind of hard to use properly (and portably). Instead we create a library of plugins by name and put what are basically factories in there. You can examine how this works by looking at the way that .so's with test suites can be dynamically loaded. An example loader is at http://svn.felspar.com/public/fost-base/trunk/fost-base/Cpp/fost-ftest/ftest.cpp and the test suite registration is in http://svn.felspar.com/public/fost-base/trunk/fost-base/Cpp/fost-test/testsuite.cpp The threadsafe_store holds the factories by name and the suite constructor registers the factory.
I completely missed the fact that I was calling dlclose in my plugin_loader's destructor, and for some reason the plugins were getting destructed between the RunApplicationEventLoop call and the call to sendToPlugin. I removed the dlclose and things work now.