I'm trying to call Java methods from C++ via JNI. First of all, I found this example and tried it out. After changing the JDK path, the examples ran and worked properly.
After that, I tried to import my own JAR file and use one class from it. I copied the code from the example and replaced the classpath and class name with my own:
void MyCPPClass::CallJava()
{
    JavaVM *jvm;                                   // Pointer to the JVM (Java Virtual Machine)
    JNIEnv *env;                                   // Pointer to native interface
    //==================== prepare loading of Java VM ============================
    JavaVMInitArgs vm_args;                        // Initialization arguments
    JavaVMOption* options = new JavaVMOption[ 1 ]; // JVM invocation options
    options[ 0 ].optionString = "-Djava.class.path=MyJar.jar"; // where to find java .class
    vm_args.version = JNI_VERSION_1_8;             // minimum Java version
    vm_args.nOptions = 1;                          // number of options
    vm_args.options = options;
    vm_args.ignoreUnrecognized = false;            // invalid options make the JVM init fail
    //================= load and initialize Java VM and JNI interface ===============
    jint rc = JNI_CreateJavaVM( &jvm, (void**)&env, &vm_args );  // YES !!
    delete[] options;    // we no longer need the initialisation options
    //========================= analyse errors if any ==============================
    // if the process is interrupted before an error is returned, it's because jvm.dll
    // can't be found, i.e. its directory is not in the PATH.
    if( rc != JNI_OK )
    {
        exit( EXIT_FAILURE );
    }
    jclass cls1 = env->FindClass( "aaa/bbb/MyClass" );
    ...
}
The JAR contains only one class, aaa.bbb.MyClass, and it is built with IntelliJ IDEA and the mvn package command. I copied the JAR file next to my executable.
The value of rc is always 0 (JNI_OK), but the value of cls1 is always NULL. I think the JVM can find the JAR, because while I'm debugging I can't delete the JAR file after FindClass has run.
The JAR file does contain the MyClass.class file; I checked.
I have already checked some previous questions (1, 2, 3 and some others), but I couldn't figure out where I made a mistake.
UPDATE: I tried copying the MyClass.class file and the MyJar.jar file into the directory of the example linked above, and the JVM still can't find MyClass. Is it possible that something is missing from my Java source file? The package declaration is correct.
What can cause the JVM to be unable to find MyClass?
Moved solution from question to answer:
SOLUTION: I had to add the JARs of the Maven dependencies to the classpath as well. Now it works!
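For reference, a minimal sketch of what the corrected classpath option might look like (the dependency JAR names are illustrative; note that the classpath separator is ';' on Windows and ':' on Linux/macOS, and that wildcard entries such as lib/* are expanded by the java launcher, not by a JVM created through JNI, so each JAR generally has to be listed explicitly):

options[ 0 ].optionString =
    "-Djava.class.path=MyJar.jar;lib/dependency1.jar;lib/dependency2.jar";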
Related
I am having some trouble with a non-working call to FindClass in my application. I've looked at many questions on this subject, but none of them solved my problem...
The JVM creation code:
JavaVM *test_jvm;
JNIEnv *test_jenv;
JavaVMInitArgs vm_args; /* JDK 1.1 VM initialization arguments */
JNI_GetDefaultJavaVMInitArgs(&vm_args);
JavaVMOption options[1];
options[0].optionString = classpath;
vm_args.version = JNI_VERSION_1_4; /* New in 1.1.2: VM version */
vm_args.nOptions = 1;
vm_args.options = options;
vm_args.ignoreUnrecognized = false;
jint ret = JNI_CreateJavaVM(&test_jvm, reinterpret_cast<void**>(&test_jenv), &vm_args);
assert(ret == JNI_OK);
assert(test_jenv->FindClass("my/package/MyClass") != 0);
The definition of classpath:
char classpath[] = "-Djava.class.path=lib/somejar1.jar:"
                   "bin:"
                   "lib/somejar2.jar:"
                   "lib/somejar3.jar";
When I run this code, the assert() on FindClass fails, but when I run this command (from the same location):
java -cp lib/somejar1.jar:bin:lib/somejar2.jar:lib/somejar3.jar my.package.MyClass
...everything works fine.
The class I am loading is very simple:
package my.package;
public class MyClass {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
I also tried without the main method but the result was the same. I can load classes from the JARs without issues, but I cannot load this one class from the bin directory...
I have checked many questions, but most of them were about an incorrect classpath or an incorrect naming convention... I double-checked everything and did not find any issue.
Some other answers mentioned problems with threads, but since I am creating the JVM from a C++ environment, I don't know whether this applies. And even if it does, this call to FindClass should not be problematic, should it?
I would really appreciate a solution to this, or at least a way to debug in more depth what is happening...
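For anyone debugging the same symptom: when FindClass returns NULL it leaves a pending Java exception (often a NoClassDefFoundError, or an UnsupportedClassVersionError for version mismatches like the one described in the answer below) that can be printed. A minimal diagnostic sketch, reusing the test_jenv from the code above:

// Diagnostic sketch: print the pending exception that explains why FindClass failed.
jclass cls = test_jenv->FindClass("my/package/MyClass");
if (cls == NULL && test_jenv->ExceptionCheck()) {
    test_jenv->ExceptionDescribe();  // writes the exception and backtrace to stderr
    test_jenv->ExceptionClear();     // clear it so later JNI calls are allowed
}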
Disclaimer: I found the root of the problem, and I am writing this answer because I did not find a canonical one for FindClass-related problems.
The issue was a mismatch between the Java version of the javac executable used to compile the .java file (Java 1.7, the standard Java installation on the machine) and the version of the JVM loaded by the JNI code (OpenJDK 1.5, a custom installation).
Once I compiled the Java files with the correct javac executable, the code worked without issue.
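A mismatch like this surfaces on the Java side as an UnsupportedClassVersionError, and one way to confirm it without a debugger is to read the class file format version stored in the first bytes of the .class file (the 0xCAFEBABE magic, then minor and major version; major 49 corresponds to Java 5 and 51 to Java 7). A minimal sketch, with the file path as a placeholder:

#include <cstdio>

// Prints the class file format version of a compiled .class file.
// Major 49 = Java 5, 50 = Java 6, 51 = Java 7, 52 = Java 8.
int main() {
    std::FILE *f = std::fopen("MyClass.class", "rb");  // placeholder path
    if (!f) { std::perror("fopen"); return 1; }
    unsigned char h[8];
    if (std::fread(h, 1, sizeof h, f) == sizeof h) {
        unsigned minor = (h[4] << 8) | h[5];
        unsigned major = (h[6] << 8) | h[7];
        std::printf("class file version %u.%u\n", major, minor);
    }
    std::fclose(f);
    return 0;
}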
I have created a 64-bit executable using Visual Studio 2015, intended to be run on Windows 7. The executable is a C++ wrapper that calls the main method in a Java application via JNI. The application runs as expected, but in Windows Task Manager, on the "Process" tab, my application name has been prepended with 16 hex digits. So even though my application compiles to "someapp.exe", it is listed as "80005b29594d4a91someapp.exe" in the process list when I run it. Does anyone know why this is happening, and how to make it show up as just "someapp.exe" in the Task Manager?
EDIT 1:
I should note that the hex string is always the same when it appears in the name. However, there is a small percentage of the time I run my application when it actually has the expected name of "someapp.exe". I have not been able to figure out the pattern of when the hex string is prepended and when it is not, but I estimate the hex string appears 98% of the time it is executed.
EDIT 2:
This appears to be related somehow to the use of JNI. When I remove the JNI calls, this stops occurring altogether. The following represents the entirety of the C++ code making up the "someapp" application:
#include <jni.h>
#include <Windows.h>

#define STRING_CLASS "java/lang/String"

int main(size_t const argc, char const *const argv[]) {
    // Modify the DLL search path
    SetDefaultDllDirectories(LOAD_LIBRARY_SEARCH_SYSTEM32 |
        LOAD_LIBRARY_SEARCH_DEFAULT_DIRS | LOAD_LIBRARY_SEARCH_USER_DIRS);
    SetDllDirectoryA(R"(C:\Program Files\Java\jdk1.8.0_112\jre\bin\server)");
    // Create and populate the JVM input arguments
    JavaVMInitArgs vm_args;
    vm_args.version = JNI_VERSION_1_8;
    vm_args.ignoreUnrecognized = JNI_FALSE;
    vm_args.nOptions = 2;
    vm_args.options = new JavaVMOption[vm_args.nOptions];
    // Set command-line options
    vm_args.options[0].optionString = "-Dfile.encoding=UTF-8";
    vm_args.options[1].optionString = "-Djava.class.path=someapp.jar";
    // Create the JVM instance
    JavaVM *jvm;
    JNIEnv *env;
    JNI_CreateJavaVM(&jvm, reinterpret_cast<void**>(&env), &vm_args);
    // Get the main entry point of the Java application
    jclass mainClass = env->FindClass("myNamespace/MainClass");
    jmethodID mainMethod = env->GetStaticMethodID(
        mainClass, "main", "([L" STRING_CLASS ";)V");
    // Create the arguments passed to the JVM
    jclass stringClass = env->FindClass(STRING_CLASS);
    jobjectArray mainArgs = env->NewObjectArray(
        static_cast<jsize>(argc - 1), stringClass, NULL);
    for (size_t i(1); i < argc; ++i) {
        env->SetObjectArrayElement(mainArgs,
            static_cast<jsize>(i - 1), env->NewStringUTF(argv[i]));
    }
    env->CallStaticVoidMethod(mainClass, mainMethod, mainArgs);
    // Free the JVM, and return
    jvm->DestroyJavaVM();
    delete[] vm_args.options;
    return 0;
}
I have tried to remove the arguments passed to the Java main method, but that had no effect on the outcome.
EDIT 3:
Thanks to the suggestion from 1201ProgramAlarm, I realized that this was actually related to running from a dynamic ClearCase view. The "Image Path Name" column in the Task Manager was one of the following values, which directly correlates with the incorrect "Image Name" symptom that I was observing:
\\view\view-name\someapp-path\someapp.exe
\\view-server\views\domain\username\view-name.vws\.s\00035\80005b29594d4a91someapp.exe
I would still like to know why this is happening, but since this only affects our development environment, fixing it has become low priority. For anyone else experiencing this problem, the following represents the relevant software installed in my environment:
Windows 7 Enterprise x64 SP1
Rational ClearCase Explorer 7.1.2.8
Visual Studio 2015 Update 3
Java x64 JDK 8u112
Run your application from a drive that isn't a ClearCase dynamic view.
The Image Name of the running process references a file in a ClearCase view storage (\\view\view-name\someapp-path\someapp.exe => \\view-server\views\domain\username\view-name.vws\.s\00035\80005b29594d4a91someapp.exe), the .vws meaning view storage.
See "About dynamic view storage directories":
Every view has a view storage directory. For dynamic views, this directory is used to keep track of which versions are checked out to your view and to store view-private objects
So a view storage exists both for snapshot and dynamic views.
But for a dynamic view, that storage is also used to keep a local copy of the file you want to read/execute (all the other visible files are accessed over the network through MVFS: the MultiVersion File System).
That is why you see \\view-server\views\domain\username\view-name.vws\.s\00035\80005b29594d4a91someapp.exe when you execute that file: you are seeing the local copy made by ClearCase through MVFS.
Had you used a snapshot view, you would not have seen such a complex path, since a snapshot view by its very nature copies all files locally.
It appears as though the path is "correct" when I have not accessed the MVFS mount recently using Windows Explorer
That means the executable run by Windows is still the correct one, while MVFS is busy downloading that same executable from the VOB into the inner folder of the view storage.
But once you re-execute it, that executable is already there (in the view storage), so MVFS communicates its full path (again, in the view storage) to Windows (as seen in Process Explorer).
I have a need to create a Tcl extension that calls a managed .NET DLL/Class Library. Currently, the structure of my application is Tcl > DLL Wrapper (C++ CLR) > .NET Class Library (VB.NET), where ">" represents a function call.
My VB.NET DLL just takes a value and returns it back, keeping it simple for now. In the end, this will do some more advanced stuff that makes use of some .NET functionality.
Public Class TestClass
    Public Function TestFunction(ByVal param As Integer) As Integer
        Return param
    End Function
End Class
My Tcl Extension (C++ CLR) creates an object of the type above
int TestCmd(ClientData data, Tcl_Interp *interp, int objc, Tcl_Obj *CONST objv[])
{
    // Check the number of arguments
    if (objc != 2) {
        Tcl_WrongNumArgs(interp, 0, objv, "arg");
        return TCL_ERROR;
    }
    int param, result;
    if (Tcl_GetIntFromObj(interp, objv[1], &param) != TCL_OK)
        return TCL_ERROR;
    SimpleLibrary::TestClass^ myclass = gcnew SimpleLibrary::TestClass(); //System.IO.FileNotFoundException
    result = myclass->TestFunction(param);
    Tcl_SetObjResult(interp, Tcl_NewIntObj(result));
    return TCL_OK;
}
And finally, my Tcl script loads the extension and calls the function.
load SimpleTclExtension.dll
TestCmd 2
If my VB.NET DLL is in the same directory as my extension DLL, the extension crashes when it instantiates a TestClass object. I've noticed if the VB.NET DLL is relocated to C:\Tcl\bin, the extension will find it, and TestCmd can be called just fine. The problem is that this will eventually need to be deployed across a number of PCs, and it's preferred not to mingle my application's files with another application's.
It seems like there should be some configuration settings that will fix this problem, but I'm not sure where. Any help is greatly appreciated.
Firstly, depending on just what kind of Tcl application you are using, you may want to look at Eagle, which is an implementation of Tcl on the CLR.
I think you are bumping into .NET's desire to only load assemblies from your application's directory or its immediate subdirectories. The application here is the tclsh/wish executable, which is why moving the .NET assembly makes it load. You can fix this with suitable manifests or with API calls that permit assembly loading from alternate locations. In this case I think you will need to run some initialization code in your Tcl extension when it gets loaded into the Tcl interpreter, to initialize the CLR and add the extension's location as a suitable place to load assemblies from. It has been a while since I looked at this, so I have forgotten the details, but I think you want to look at the AppDomain object and check the assembly loading path properties associated with it or its child objects. Try AppDomain.RelativeSearchPath.
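As a rough illustration of that idea (untested, and the names below are only placeholders), the extension could hook the current AppDomain's AssemblyResolve event from its init routine and probe its own directory:

// Hypothetical C++/CLI sketch: let the CLR find assemblies that live next to the
// Tcl extension instead of next to tclsh/wish.
using namespace System;
using namespace System::IO;
using namespace System::Reflection;

ref class LocalAssemblyResolver {
public:
    static Assembly^ OnResolve(Object^ /*sender*/, ResolveEventArgs^ args) {
        // Directory containing this (already loaded) extension assembly
        String^ baseDir = Path::GetDirectoryName(
            Assembly::GetExecutingAssembly()->Location);
        String^ candidate = Path::Combine(
            baseDir, (gcnew AssemblyName(args->Name))->Name + ".dll");
        if (File::Exists(candidate))
            return Assembly::LoadFrom(candidate);
        return nullptr;  // fall back to the normal probing rules
    }
};

// Call this once when the Tcl extension is loaded (e.g. from its *_Init function).
void InstallAssemblyResolver() {
    AppDomain::CurrentDomain->AssemblyResolve +=
        gcnew ResolveEventHandler(&LocalAssemblyResolver::OnResolve);
}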
To be more specific, Eagle includes Garuda, which is a Tcl extension built specifically to allow calling .NET from Tcl.
I'm working on embedding Python in our test suite application. The purpose is to use Python to run several test scripts to collect data and produce a test report. Multiple test scripts within one test run can create global variables and functions that can be used in the next script.
The application also provides extension modules that are imported into the embedded interpreter and are used to exchange some data with the application.
But the user can also perform multiple test runs. I don't want to share those globals, imports and exchanged data between test runs. I have to be sure I restart from a pristine state, so that I control the test environment and get the same results.
How should I reinitialise the interpreter?
I used Py_Initialize() and Py_Finalize(), but I get an exception on the second run, when the extension modules I provide to the interpreter are initialised a second time.
And the documentation warns against using them more than once.
Using sub-interpreters seems to come with the same caveats regarding extension module initialisation.
I suspect that I'm doing something wrong with the initialisation of my extension modules, but I fear that the same problem would occur with third-party extension modules.
Maybe it's possible to get it to work by launching the interpreter in its own process, so as to be sure that all the memory is released.
By the way, I'm using Boost.Python for this, which also warns AGAINST using Py_Finalize!
Any suggestions?
Thanks
Here is another way I found to achieve what I want: starting with a clean slate in the interpreter.
I can control the global and local namespaces I use to execute the code:
// get the dictionary from the main module
// Get pointer to main module of python script
object main_module = import("__main__");
// Get dictionary of main module (contains all variables and stuff)
object main_namespace = main_module.attr("__dict__");
// define the dictionaries to use in the interpreter
dict global_namespace;
dict local_namespace;
// add the builtins
global_namespace["__builtins__"] = main_namespace["__builtins__"];
I can then use the namespaces for executing the code contained in pyCode:
exec( pyCode, global_namespace, local_namespace );
When I want to run a new instance of my test, I can clean the namespaces by clearing the dictionaries:
// empty the interpreters namespaces
global_namespace.clear();
local_namespace.clear();
// Copy builtins to new global namespace
global_namespace["__builtins__"] = main_namespace["__builtins__"];
Depending on the level at which I want the execution to happen, I can also use the same dictionary for both namespaces (global = local).
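To illustrate the effect, here is a small sketch (under the same Boost.Python setup as above) that runs two snippets with the global namespace cleared in between, so the second one no longer sees the first one's variables:

// Sketch: two "test runs" that share nothing, because the dict is cleared in between.
dict global_namespace;
object builtins = import("__main__").attr("__dict__")["__builtins__"];
global_namespace["__builtins__"] = builtins;

exec("x = 41\n", global_namespace, global_namespace);    // first run defines x

global_namespace.clear();                                 // wipe the slate
global_namespace["__builtins__"] = builtins;              // restore the builtins

exec("print('x' in globals())\n", global_namespace, global_namespace);  // prints False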
How about using code.InteractiveInterpreter?
Something like this should do it:
#include <boost/python.hpp>
#include <string>
#include <stdexcept>

using namespace boost::python;

std::string GetPythonError()
{
    PyObject *ptype = NULL, *pvalue = NULL, *ptraceback = NULL;
    PyErr_Fetch(&ptype, &pvalue, &ptraceback);
    std::string message("");
    if(pvalue && PyString_Check(pvalue)) {
        message = PyString_AsString(pvalue);
    }
    return message;
}

// Must be called after Py_Initialize()
void RunInterpreter(std::string codeToRun)
{
    object pymodule = object(handle<>(borrowed(PyImport_AddModule("__main__"))));
    object pynamespace = pymodule.attr("__dict__");
    try {
        // Initialize the embedded interpreter
        object result = exec( "import code\n"
                              "__myInterpreter = code.InteractiveConsole() \n",
                              pynamespace);
        // Run the code
        str pyCode(codeToRun.c_str());
        pynamespace["__myCommand"] = pyCode;
        result = eval("__myInterpreter.push(__myCommand)", pynamespace);
    } catch(error_already_set) {
        throw std::runtime_error(GetPythonError().c_str());
    }
}
I'd write another shell script that executes the sequence of test scripts with a new instance of Python each time. Or write it in Python, something like:
import subprocess

# run your tests in this process first
# now run the user scripts, each in a new process to have a virgin env
for script in userScript:
    subprocess.call(['python', script])
I have an Apache module that works properly on CentOS but fails on Ubuntu. I have tracked the problem down to the fact that the filename field is NULL in the request_rec struct that Apache passes to the hook function my module defines to check the file type of the file being processed.
i.e.,
extern "C" module AP_MODULE_DECLARE_DATA MyModule = {
STANDARD20_MODULE_STUFF, // initializer
NULL, // create per-dir config
NULL, // merge per-dir config
NULL, // server config
NULL, // merge server config
MyCommandTable, // command table
MyRegisterHooks // register hooks
};
... with
static void MyRegisterHooks(apr_pool_t *p)
{
    ap_hook_child_init(MyPluginInit, NULL, NULL, APR_HOOK_MIDDLE);
    ap_hook_type_checker(MyCheckAppFileType, NULL, NULL, APR_HOOK_MIDDLE);
    ap_hook_handler(MyFileHandler, NULL, NULL, APR_HOOK_MIDDLE);
}
... and, finally, the culprit function:
static int MyCheckAppFileType(request_rec *ap_req)
{
    if(ap_req == NULL)
    {
        return DECLINED; // Not reached
    }
    if(ap_req->filename == NULL)
    {
        return DECLINED; // HERE is the problem ... Why is ap_req->filename NULL?
                         // On CentOS it is not NULL, only on Ubuntu.
    }
    // ...
}
I am using Apache 2.2 on both Ubuntu and CentOS, and I have built the module from scratch on both systems independently.
FURTHER INFO ~3 months later:
I have discovered that if I build the module on CentOS and then copy the binary over to Ubuntu, it works. Taking the identical code and building it on Ubuntu causes the above failure at runtime. Therefore, the code itself does not seem to be the problem -- or at least something is being handled differently by the compiler on the two systems, causing the CentOS build to succeed where the Ubuntu build fails.
I ran into an almost identical problem just now. I'm using a module that someone else wrote. On one test machine the module was doing authN and authZ correctly. On a different machine, r->filename was NULL inside the auth checker hook. Your question here came up first in my Google search.
In my case I tracked it down to an incorrect usage of ap_set_module_config. The code looked like:
ap_set_module_config(r, &auth_my_module, attrs);
The source code for ap_set_module_config says this:
/**
 * Generic accessors for other modules to set at their own module-specific
 * data
 * @param conf_vector The vector in which the modules configuration is stored.
 *        usually r->per_dir_config or s->module_config
 * @param m The module to set the data for.
 * @param val The module-specific data to set
 */
AP_DECLARE(void) ap_set_module_config(ap_conf_vector_t *cv, const module *m,
                                      void *val);
I changed the call to ap_set_module_config to this:
ap_set_module_config(r->per_dir_config, &auth_my_module, attrs);
And just like magic, r->filename is no longer NULL in the auth checker hook. I'm new to writing Apache modules, so I still need to find out exactly what this means, but I'm glad I figured out where the problem is. You might check your code to see how you call ap_set_module_config.
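For completeness, the matching read side uses the companion accessor ap_get_module_config with the same config vector; a minimal sketch, keeping the hypothetical auth_my_module name from above:

/* Hypothetical sketch: retrieve what the corrected ap_set_module_config call stored. */
static int my_auth_checker(request_rec *r)
{
    void *attrs = ap_get_module_config(r->per_dir_config, &auth_my_module);
    if (attrs == NULL) {
        return DECLINED;  /* nothing stored for this module */
    }
    /* ... use attrs ... */
    return OK;
}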
Edit: I should add that I tracked this down by adding debug statements that showed me exactly when the filename field of the request_rec struct became NULL. For me it went from valid to NULL inside a hook I defined.
Good luck!