While there is documentation on converting a jstring to a native string (string nativeString = env->GetStringUTFChars(jStringVariable, NULL);), I can't find an example that converts a jboolean to a bool or a jint to an int.
Can anyone suggest how this is achieved?
You just need to cast jint to int using a C-style cast. The same goes for jboolean to bool (if you're using the C99 bool type), to uint8_t (if you're using stdint types), or to unsigned char.
Open $NDK_ROOT/platforms/android-8/arch-arm/usr/include/jni.h and you'll see that jint, jboolean, etc. are just typedefs.
To cast a jboolean (which may only contain the values JNI_FALSE or JNI_TRUE) to a native bool I would use something like this:
(bool)(jboolean == JNI_TRUE)
If perhaps the jboolean isn't coming from the JVM, then testing for jboolean != JNI_FALSE might be considered safer.
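Putting this into a JNI entry point, a minimal sketch (the package, class, and method names here are made up for illustration):

#include <jni.h>

extern "C" JNIEXPORT void JNICALL
Java_com_example_Demo_setState(JNIEnv* env, jobject thiz,
                               jboolean enabled, jint count) {
    // jboolean holds JNI_TRUE or JNI_FALSE; compare rather than cast blindly
    bool nativeEnabled = (enabled == JNI_TRUE);
    // jint is a 32-bit signed integer, so a plain cast suffices
    int nativeCount = (int)count;
    (void)env; (void)thiz; (void)nativeEnabled; (void)nativeCount;
}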
Same issue here, fixed. In my case I'm using openFrameworks, so I don't know if this applies to non-openFrameworks projects (I haven't tested). It appears, however, that the first two arguments of an external function are always "env" and "thiz", and these need to be declared explicitly for each new extern function.
extern "C"{
// casts the variable properly
void Java_com_package_JavaClass_someFunction( JNIEnv* env, jobject thiz, jboolean yourBool ){
myTestApp->someFunction( (bool) yourBool );
}
// "yourBool" will always be "1" because its taking the spot of "thiz" which is not null
void Java_com_package_JavaClass_someFunction( JNIEnv* env, jboolean yourBool ){
myTestApp->someFunction( (bool) yourBool );
}
// "yourBool" will always be "1" because its taking the spot of "env" which is not null
void Java_com_package_JavaClass_someFunction( jboolean yourBool ){
myTestApp->someFunction( (bool) yourBool );
}
}
The odd one out is jchar. It's defined as unsigned short, and depending on your compilation settings, that may or may not be equivalent to wchar_t. Depending on your underlying platform, you may be better off working with UTF8 strings. At least those are bitwise equivalent to ASCII for the ASCII subset of characters.
On Windows and Mac OS/Cocoa, however, the native wide string representation is exactly unsigned short. Java strings fit naturally into that.
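For completeness, a sketch of the UTF-8 route mentioned above; note that the buffer returned by GetStringUTFChars must be released:

#include <jni.h>
#include <string>

// Copy a jstring out as UTF-8, then release the JVM's buffer.
std::string toNativeString(JNIEnv* env, jstring jStr) {
    const char* utf8 = env->GetStringUTFChars(jStr, NULL);
    if (utf8 == NULL) return std::string(); // OutOfMemoryError is pending
    std::string result(utf8);
    env->ReleaseStringUTFChars(jStr, utf8);
    return result;
}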
If you just want to use it in an if (...) statement, this works for me without an explicit conversion (myJboolean stands in for your jboolean variable):
if (myJboolean == JNI_TRUE) {
    return true;
} else {
    return false;
}
This has worked for me:
bool isValue = bool(jbooleanValue);
I'd like to have access to the $HOME environment variable in a C++ program that I'm writing. If I were writing code in C, I'd just use the getenv() function, but I was wondering if there was a better way to do it. Here's the code that I have so far:
std::string get_env_var( std::string const & key ) {
    char * val;
    val = getenv( key.c_str() );
    std::string retval = "";
    if (val != NULL) {
        retval = val;
    }
    return retval;
}
Should I use getenv() to access environment variables in C++? Are there any problems that I'm likely to run into that I can avoid with a little bit of knowledge?
There is nothing wrong with using getenv() in C++. It is declared in stdlib.h, or if you prefer the standard library header, you can include cstdlib and access the function via the std:: namespace (i.e., std::getenv()). Absolutely nothing wrong with this. In fact, if you are concerned about portability, either of these two versions is preferred.
If you are not concerned about portability and you are using managed C++, you can use the .NET equivalent - System::Environment::GetEnvironmentVariable(). If you want the non-.NET equivalent for Windows, you can simply use the GetEnvironmentVariable() Win32 function.
I would just refactor the code a little bit:
std::string getEnvVar( std::string const & key )
{
char * val = getenv( key.c_str() );
return val == NULL ? std::string("") : std::string(val);
}
If you are on Windows you can use the Win32 API GetEnvironmentVariable.
On Linux/Unix-based systems, use getenv.
Why use GetEnvironmentVariable on Windows? From the MSDN documentation for getenv:
getenv operates only on the data structures accessible to the run-time library and not on the environment "segment" created for the process by the operating system. Therefore, programs that use the envp argument to main or wmain may retrieve invalid information.
And from MSDN's GetEnvironmentVariable:
This function can retrieve either a system environment variable or a user environment variable.
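A small sketch illustrating the difference those quotes describe; with the Microsoft CRT, a variable set through the Win32 API after startup is typically not visible through getenv, because getenv reads the CRT's private copy of the environment:

#include <cstdio>
#include <cstdlib>
#include <windows.h>

int main() {
    SetEnvironmentVariableA("DEMO_VAR", "42");
    const char* crt = std::getenv("DEMO_VAR"); // likely NULL: the CRT copy is stale
    char win[16] = {0};
    GetEnvironmentVariableA("DEMO_VAR", win, sizeof win); // sees "42"
    std::printf("getenv: %s, Win32: %s\n", crt ? crt : "(null)", win);
    return 0;
}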
In C++ you have to use std::getenv and #include <cstdlib>.
A version of @Vlad's answer with some error checking, which distinguishes empty from missing values:
#include <cstdlib>
#include <stdexcept>
#include <string>

inline std::string get_env(const char* key) {
    if (key == nullptr) {
        throw std::invalid_argument("Null pointer passed as environment variable name");
    }
    if (*key == '\0') {
        throw std::invalid_argument("Value requested for the empty-name environment variable");
    }
    const char* ev_val = std::getenv(key);
    if (ev_val == nullptr) {
        throw std::runtime_error("Environment variable not defined");
    }
    return std::string(ev_val);
}
Notes:
You could also replace the use of exceptions in the above with a std::optional<std::string> or, in the future, with a std::expected (if that ends up being standardized); see the sketch after these notes.
I've chosen safety over informativeness here by not concatenating the key into the exception's what() string. If you make the alternative choice, limit how much of key you copy (to something reasonable, say 100 or 200 characters), check that those characters are printable, and sanitize the ones that aren't.
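As a sketch of that first note, an optional-returning variant might look like this (C++17):

#include <cstdlib>
#include <optional>
#include <string>

// Missing (or invalid) names become std::nullopt instead of exceptions.
inline std::optional<std::string> get_env_opt(const char* key) {
    if (key == nullptr || *key == '\0') {
        return std::nullopt; // invalid name; you could still throw here
    }
    const char* ev_val = std::getenv(key);
    if (ev_val == nullptr) {
        return std::nullopt; // variable not defined
    }
    return std::string(ev_val);
}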
Yes, I know this is an old thread!
Still, common mistakes are, by definition, not new. :-)
The only reasons I see for not just using std::getenv() would be to add a known default or to adopt a common pattern/API in a framework. I would also avoid exceptions in this case (though not in general), simply because a missing value is often a perfectly valid response for an environment variable; adding the complexity of exception handling is counter-productive.
This is basically what I use:
#include <cerrno>
#include <cstdlib>

const char* GetEnv( const char* tag, const char* def=nullptr ) noexcept {
    const char* ret = std::getenv(tag);
    return ret ? ret : def;
}

int main() {
    int ret=0;
    if( GetEnv("DEBUG_LOG") ) {
        // Set up debug logging
    } else {
        // ...
    }
    return (-1==ret?errno:0);
}
The difference between this and the other answers may seem small, but I find such small details are very rewarding when you form habits in how you code.
Just like the fact that getenv() returns a non-const pointer, which could easily lead to bad habits!
Sorry if this is a really dumb question but here goes.
I recently got into C++ and I have to modify a driver for a project I am working on. The problem is that my driver needs to take a string I'm storing in a void*. So basically, my question is, how can I cast this, or do this, in a very simple way?
void get_modulebase(int pid, void* value, void* data) {
    PEPROCESS t_process;
    UNICODE_STRING mod;
    KAPC_STATE apc;
    DbgPrint("Data: %s \n", data); // this prints the string as I need it
    RtlInitUnicodeString(&mod, (PCWSTR)data); // this fails
    PsLookupProcessByProcessId((HANDLE)pid, &t_process);
    PVOID base_address = BBGetUserModule(t_process, &mod);
    KeUnstackDetachProcess(&apc);
    RtlCopyMemory(value, &base_address, 8);
    ObfDereferenceObject(t_process);
}
This works for me, but I need to store the module name into data:
RtlInitUnicodeString(&mod, L"notepad.exe");
RtlInitUnicodeString is one of the few unsafe string calls left that Windows does not flag.
Refer to: char* (char pointer) and the RtlInitUnicodeString() function.
I suggest using the RTL_CONSTANT_STRING macro from the WDK, which can create a UNICODE_STRING at compile time.
Like this:
static const UNICODE_STRING foo = RTL_CONSTANT_STRING(L"notepad.exe");
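If the module name is only known at runtime inside data, a compile-time macro can't help; assuming data points to a NUL-terminated narrow (char) string, a conversion sketch along these lines may be what's needed:

ANSI_STRING ansi;
UNICODE_STRING mod;
RtlInitAnsiString(&ansi, (PCSZ)data);
// TRUE: let the routine allocate the UTF-16 buffer.
NTSTATUS status = RtlAnsiStringToUnicodeString(&mod, &ansi, TRUE);
if (NT_SUCCESS(status)) {
    // ... use mod ...
    RtlFreeUnicodeString(&mod); // free the allocated buffer
}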
I have a function like this:
#include <cstdarg>
#include <cstdint>

typedef long long myint64;

typedef enum {
    INT32_FIELD,
    CHARP_FIELD,
    INT64_FIELD,
} InfoType;

int32_t ReadInfo(void *handle, InfoType info, ...)
{
    va_list arg;
    va_start(arg, info);
    void *argPtr = va_arg(arg, void*);
    va_end(arg);
    int32_t ret = 0;
    int32_t *paramInt = NULL;
    char **paramCharp = NULL;
    myint64 *paramInt64 = NULL;
    switch (info) {
    case INT32_FIELD:
        paramInt = static_cast<int32_t*>(argPtr);
        *paramInt = functionWhichReturnsInt32();
        break;
    case CHARP_FIELD:
        paramCharp = static_cast<char**>(argPtr);
        *paramCharp = functionWhichReturnsCharPtr();
        break;
    case INT64_FIELD:
        paramInt64 = static_cast<myint64*>(argPtr);
        *paramInt64 = functionWhichReturnsInt64();
        break;
    default:
        ret = -1;
        break;
    }
    return ret;
}
This function is called as follows from a separate C file, which does not include the definition of ReadInfo:
extern "C" {int32_t CDECL ReadInfo(intptr_t, int32_t, int32_t*);}
int32_t readInt()
{
    int32_t value = 0;
    int32_t *ptr = &value;
    ReadInfo(handle, INT32_FIELD, ptr);
    return value;
}
This call fails only on iOS arm64; armv7s and Win32 work fine with it. (Yes, our only 64-bit target platform is iOS arm64.)
In the debugger I found that the address of ptr in the readInt function is different from what I get with:
void *argPtr = va_arg(arg, void*);
Am I using va_list incorrectly?
P.S. It is not a plain Objective-C application; it is part of a native Unity plugin. On iOS, Unity code is simply transformed from C# into Objective-C/C++, which is why you see the second declaration:
extern "C" {int32_t CDECL ReadInfo(intptr_t, int32_t, int32_t*);}
It's not an issue of IL2CPP but an issue of iOS, or maybe the compiler.
The following code reproduces the issue even on the latest Xcode (10.1) and iOS (12.1):
typedef int __cdecl (*PInvokeFunc) (const char*, int);

int test()
{
    PInvokeFunc fp = (PInvokeFunc)printf;
    fp("Hello World: %d", 10);
    return 0;
}
The expected output is Hello World: 10, but on iOS it prints Hello World: ??? (a random number).
I tried the same code on macOS and Linux and both of them work well.
I'm not sure if it relates to the Apple document or not:
Variadic Functions
The iOS ABI for functions that take a variable number of arguments is entirely different from the generic version.
Stages A and B of the generic procedure call standard are performed as usual—in particular, even variadic aggregates larger than 16 bytes are passed via a reference to temporary memory allocated by the caller. After that, the fixed arguments are allocated to registers and stack slots as usual in iOS.
The NSRN is then rounded up to the next multiple of 8 bytes, and each variadic argument is assigned to the appropriate number of 8-byte stack slots.
The C language requires arguments smaller than int to be promoted before a call, but beyond that, unused bytes on the stack are not specified by this ABI.
As a result of this change, the type va_list is an alias for char * rather than for the struct type specified in the generic PCS. It is also not in the std namespace when compiling C++ code.
https://developer.apple.com/library/archive/documentation/Xcode/Conceptual/iPhoneOSABIReference/Articles/ARM64FunctionCallingConventions.html
Updates:
The reply from an Apple engineer:
Casting function pointers to add a different calling convention doesn’t change how the callee is represented, it only changes how the caller performs its call. printf already has a calling convention, and what you’re doing might happen to work for some combinations on some platforms, while not working on others. You want to declare a wrapper function instead, which has the desired calling convention, and which calls the function you want. You’ll need to marshal the arguments manually.
That is to say, a variadic function can't be called directly via P/Invoke unless IL2CPP generates a wrapper function for it; a function pointer alone is not enough.
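A minimal sketch of the wrapper the engineer describes, using the printf example from above; the function pointer now targets a callee whose calling convention matches the call:

#include <stdio.h>

// Non-variadic wrapper with a fixed signature; point function pointers
// at this, not at printf itself.
int printf_wrapper(const char *fmt, int value)
{
    return printf(fmt, value);
}

typedef int (*PInvokeFunc)(const char *, int);

int test(void)
{
    PInvokeFunc fp = printf_wrapper;
    return fp("Hello World: %d\n", 10); // now prints 10 on iOS as well
}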
The cause of this problem was IL2CPP, which generates the calls to the variadic function. It does not use my types like InfoType and myint64; it uses platform-specific types for the info variable, and I guess the sizes may differ.
I just added three new functions for the Unity API:
int32_t ReadInfoInt(void *handle, InfoType info, int *ret);
int32_t ReadInfoInt64(void *handle, InfoType info, myint64 *ret);
int32_t ReadInfoStr(void *handle, InfoType info, char **ret);
These functions just call ReadInfo.
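For illustration, a sketch of the int variant, which simply forwards to the variadic function:

// Fixed-signature wrapper; IL2CPP now only generates non-variadic calls.
int32_t ReadInfoInt(void *handle, InfoType info, int *ret)
{
    return ReadInfo(handle, info, ret);
}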
It is 100% a workaround, but it is better than fighting with IL2CPP.
I am writing an adapter to combine two APIs (one in C and another in C++).
If a function is called on one API, I need to pass the caller's ID and the function's arguments to an adapter and call the corresponding function with this information.
Now apparently they cannot be mapped directly, as one interface requires C++ compilation and the name mangling would break the other; that is why I am using a set of adapters in the first place.
As the number of arguments varies, I looked up variadic functions and found the idea pretty useful; however, I am operating on POD only and have to deal with structs, enums, and a lot of different arguments per call, which might need to be put back into a struct before feeding them to the target function.
Every example I stumbled upon was far simpler and involved mostly arithmetic operations like summing things up, finding the largest number, or printing, mostly done with for loops over the va_list.
Maybe I got stuck on the idea and it won't work at all, but I am just curious...
Say I wanted to assign the arguments from the list to my target functions parameters (the order of the arguments passed is the correct one), what would be a good way?
BOOL Some_Function(
    /* in */ CallerId *pObjectId,
    /* in */ someDataType argument1 )
{
    BOOL ret = Adapter_Call(pFunction, pObjectId, argument1);
    return ret;
}
and so once I made it to the right adapter I want to do
BOOL Adapter_Call(*pFunction, *pObjectId, argument1, ...)
{
    va_list args;
    va_start(args, argument1);
    /* go over the list and do `var_list[i] = pFunctionArgList[i]`, which is
       of whatever type, so I can use it as input for my function */
    va_end(args);
    pObjectId.pFunction(arg1, ..., argn);
}
Can I access the input parameters of a function to perform assignments like this?
Has anyone done something like this before? Is there a conceptual mistake in my thinking?
All I found on the net was this: http://www.drdobbs.com/cpp/extracting-function-parameter-and-return/240000586 but due to the use of templates I am not sure it wouldn't create another problem, so in the end implementing an adapter for each and every single function call may be simpler.
A SO search only returned this: Dynamic function calls at runtime (va_list)
First, you should heed Kerrek's advice about extern "C". This is C++'s mechanism for giving an identifier C linkage, meaning that the name won't be mangled by the C++ compiler.
Sometimes an adapter still needs to be written for a C++ interface, because it manipulates objects that do not map to a C POD. So the adapter gives the C interface a POD or opaque pointer type to manipulate, but the implementation of that interface converts that into a C++ object or reference and then calls the C++ interface. For example, suppose you wanted to provide a C interface for a C++ std::map<int, void *>; you would have a common header file in C and C++ that would contain:
#ifdef __cplusplus
extern "C" {
#endif
struct c_map_int_ptr;
// ...
// return -1 on failure, otherwise 0, and *data is populated with result
int c_map_int_ptr_find (struct c_map_int_ptr *, int key, void **data);
#ifdef __cplusplus
}
#endif
Then, the C++ code could implement the function like:
typedef std::map<int, void *> map_int_ptr;
int c_map_int_ptr_find (struct c_map_int_ptr *cmap, int key, void **data) {
    map_int_ptr &map = *reinterpret_cast<map_int_ptr *>(cmap);
    map_int_ptr::iterator i = map.find(key);
    if (i != map.end()) {
        *data = i->second;
        return 0;
    }
    return -1;
}
Thus, there is no need to pass the arguments passed via the C interface through a variable argument adapter. And so, there is no need for the C++ code to tease out the arguments from a variable argument list. The C code calls directly into the C++ code, which knows what to do with the arguments.
I suppose if you are trying to implement some kind of automated C adapter code generator by parsing C++ code, you could think that using variable arguments would provide a regular mechanism to communicate arguments between the generated C code interface and the generated C++ adapter code that would call the original C++ interface. For such a scenario, the code for the above example would look something like this:
// C interface
typedef struct c_map_int_ptr c_map_int_ptr;
typedef struct c_map_int_ptr_iterator c_map_int_ptr_iterator;
//...

c_map_int_ptr_iterator c_map_int_ptr_find (c_map_int_ptr *map, int key) {
    c_map_int_ptr_iterator result;
    cpp_map_int_ptr_adapter(__func__, map, key, &result);
    return result;
}

// C++ code:
struct cpp_adapter {
    virtual ~cpp_adapter () {}
    virtual void execute (va_list) {}
};

void cpp_map_int_ptr_adapter(const char *func, ...) {
    va_list ap;
    va_start(ap, func);
    cpp_map_int_ptr_adapter_method_lookup(func).execute(ap);
    va_end(ap);
}

//...
struct cpp_map_int_ptr_find_adapter : cpp_adapter {
    void execute (va_list ap) {
        map_int_ptr *map = va_arg(ap, map_int_ptr *);
        int key = va_arg(ap, int);
        c_map_int_ptr_iterator *c_iter = va_arg(ap, c_map_int_ptr_iterator *);
        map_int_ptr::iterator i = map->find(key);
        //...transfer result to c_iter
    }
};
Where cpp_map_int_ptr_adapter_method_lookup() returns an appropriate cpp_adapter instance based on a table lookup.
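One possible shape for that lookup, as a sketch (the table contents and the no-op fallback are illustrative):

#include <map>
#include <string>

cpp_adapter &cpp_map_int_ptr_adapter_method_lookup(const char *func) {
    static cpp_map_int_ptr_find_adapter find_adapter;
    static cpp_adapter no_op; // base class: execute() does nothing
    static std::map<std::string, cpp_adapter *> table;
    if (table.empty()) {
        table["c_map_int_ptr_find"] = &find_adapter;
        // ... one entry per generated C entry point ...
    }
    std::map<std::string, cpp_adapter *>::iterator i = table.find(func);
    return i != table.end() ? *i->second : no_op;
}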
(This is not so much a problem as an exercise in pedantry, so here goes.)
I've made a nice little program that is native to my linux OS, but I'm thinking it's useful enough to exist on my Windows machine too. Thus, I'd like to access Windows' environment variables, and MSDN cites an example like this:
const DWORD buff_size = 50;
LPTSTR buff = new TCHAR[buff_size];
const DWORD var_size = GetEnvironmentVariable("HOME", buff, buff_size);
if (var_size == 0) { /* fine, some failure or no HOME */ }
else if (var_size > buff_size) {
    // OK, so 50 isn't big enough.
    if (buff) delete [] buff;
    buff = new TCHAR[var_size];
    const DWORD new_size = GetEnvironmentVariable("HOME", buff, var_size);
    if (new_size == 0 || new_size > var_size) { /* *Sigh* */ }
    else { /* great, we're done */ }
}
else { /* in one go! */ }
This is not nearly as nice (to me) as using getenv and just checking for a null pointer. I'd also prefer not to dynamically allocate memory, since I'm just trying to make the program run on Windows as well as on my Linux OS, which means that this MS code has to play nicely with *nix code. More specifically:
template <class T> // let the compiler sort out between char* and TCHAR*
inline bool get_home(T& val) { // return true if OK, false otherwise
#if defined (__linux) || (__unix)
    val = getenv("HOME");
    if (val) return true;
    else return false;
#elif defined (WINDOWS) || defined (_WIN32) || defined (WIN32)
    // something like the MS code above
#else
    // probably I'll just return false here.
#endif
}
So, I'd have to allocate on the heap universally or do a #ifdef in the calling functions to free the memory. Not very pretty.
Of course, I could have just allocated 'buff' on the stack in the first place, but then I'd have to create a new TCHAR[] if 'buff_size' was not large enough on my first call to GetEnvironmentVariable. Better, but what if I was a pedant and didn't want to go around creating superfluous arrays? Any ideas on something more aesthetically pleasing?
I'm not that knowledgeable, so would anyone begrudge me deliberately forcing GetEnvironmentVariable to fail in order to get a string size? Does anyone see a problem with:
const DWORD buff_size = GetEnvironmentVariable("HOME",0,0);
TCHAR buff[buff_size];
const DWORD ret = GetEnvironmentVariable("HOME",buff,buff_size);
// ...
Any other ideas or any suggestions? (Or corrections to glaring mistakes?)
UPDATE:
Lots of useful information below. I think the best bet for what I'm trying to do is to use a static char[] like:
inline const char* get_home(void) { // inline not required, but what the hell.
#if defined (__linux) || (__unix)
    return getenv("HOME");
#elif defined (WINDOWS) || defined (WIN32) || defined (_WIN32)
    static char buff[MAX_PATH];
    const DWORD ret = GetEnvironmentVariableA("USERPROFILE", buff, MAX_PATH);
    if (ret == 0 || ret > MAX_PATH)
        return 0;
    else
        return buff;
#else
    return 0;
#endif
}
Perhaps it's not the most elegant way of doing it, but it's probably the easiest way to sync up what I want to do between *nix and Windows. (I'll also worry about Unicode support later.)
Thank you to everybody who has helped.
DWORD bufferSize = 65535; // limit according to http://msdn.microsoft.com/en-us/library/ms683188.aspx
std::wstring buff;
buff.resize(bufferSize);
bufferSize = GetEnvironmentVariableW(L"Name", &buff[0], bufferSize);
if (!bufferSize)
{
    // error: variable not found or the call failed
}
buff.resize(bufferSize);
Of course, if you want ASCII, replace wstring with string and GetEnvironmentVariableW with GetEnvironmentVariableA.
EDIT: You could also create getenv yourself. This works because
The same memory location may be used in subsequent calls to getenv, overwriting the previous content.
const char * WinGetEnv(const char * name)
{
    const DWORD buffSize = 65535;
    static char buffer[buffSize];
    if (GetEnvironmentVariableA(name, buffer, buffSize))
    {
        return buffer;
    }
    else
    {
        return 0;
    }
}
Of course, it would probably be a good idea to use the wide-character versions of all of this if you want to maintain Unicode support.
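A wide-character variant of the same idea, with the same static-buffer caveats (not thread-safe, overwritten by the next call):

const wchar_t * WinGetEnvW(const wchar_t * name)
{
    const DWORD buffSize = 65535;
    static wchar_t buffer[buffSize];
    if (GetEnvironmentVariableW(name, buffer, buffSize))
    {
        return buffer;
    }
    else
    {
        return 0;
    }
}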
This wasn't the original question, but it might be worth adding the MFC way to this thread for reference:
CString strComSpec;
if (strComSpec.GetEnvironmentVariable(_T("COMSPEC")))
{
    // Do your stuff here
}
VC++ implements getenv in stdlib.h; see the MSDN CRT documentation, for example.
The suggestion you made at the end of your post is the right way to do this - call once to get required buffer size and then again to actually get the data. Many of the Win32 APIs work this way, it's confusing at first but common.
One thing you could do is to pass in a best-guess buffer and its size on the first call, and only call again if that fails.
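A sketch of that best-guess-then-retry approach (the wrapper name is mine; it returns an empty string when the variable is not set):

#include <string>
#include <windows.h>

std::wstring win_getenv(const wchar_t* name) {
    wchar_t guess[256]; // best-guess buffer for the common case
    DWORD n = GetEnvironmentVariableW(name, guess, 256);
    if (n == 0) return std::wstring();          // not set, or error
    if (n < 256) return std::wstring(guess, n); // fit on the first try
    std::wstring big(n, L'\0'); // n is the required size, incl. the null
    DWORD m = GetEnvironmentVariableW(name, &big[0], n);
    big.resize(m); // on success m excludes the terminating null
    return big;
}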
Don't bother. %HOME% is a path on Windows, and should be usable by all reasonable programs. Therefore, it will fit in a WCHAR[MAX_PATH]. You don't need to deal with the edge case where it's longer than that; if it's longer, most file functions will reject it anyway, so you might as well fail early.
However, do not assume you can use a TCHAR[MAX_PATH] or a char[MAX_PATH]. You do not have control over the contents of %HOME%; it will contain the user's name. If that's "André" (i.e. not ASCII), you must store %HOME% in a WCHAR[MAX_PATH].