64-bit Android compiles but has problems at runtime - c++

I have an Android App with a big C++ library which runs smoothly when compiled for
APP_ABI := armeabi-v7a //32 bit
but has issues when compiled with
APP_ABI := arm64-v8a //64 bit
All JNI jint variables have been converted to jlong variables according to the NDK docs.
My problem is that, for some reason, I cannot compare variables of any data type other than int when they come in as function parameters.
This works:
unsigned long a = 200;
unsigned long b = 200;
if(a == b) {
LOGE("got here"); //This works
}
This fails:
void myClass::MyFunction(unsigned long c, unsigned long d) {
if(c == d) {
LOGE("got here"); //This does NOT work
}
}
Mind you, both of the above work in the 32-bit build. The values I read from the variables c and d are identical when logged.
Interestingly this works in the 64 bit version (int variables):
void myClass::MyFunction(int e, int f) {
if(e == f) {
LOGE("got here"); //This works
}
}
Only ints compare correctly. I have tried long, double, long long, unsigned and signed...
My NDK version is 10d (the latest). I have tried both the 32- and 64-bit versions of the NDK and the result is the same. My development platform is a Win7 64-bit desktop.
Am I missing something essential?

Found the solution to my problem:
Data type sizes are different on 64-bit, so my library was expecting 4 bytes, but longs are 8 bytes there (4 bytes when compiled for 32-bit). Typecasting them to uint32_t did the trick.
Plain int worked because that's 4 bytes on both ABIs.
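For illustration, here is a minimal sketch (not the original library code) of the fixed-width approach; uint32_t from <cstdint> is 4 bytes under both armeabi-v7a and arm64-v8a, so the comparison behaves identically on both ABIs:

#include <cstdio>
#include <cstdint>

void compare(uint32_t c, uint32_t d) {
    // uint32_t is exactly 4 bytes on both 32- and 64-bit ABIs, so caller
    // and callee agree on the argument size regardless of APP_ABI.
    if (c == d) {
        printf("got here\n");
    }
}

int main() {
    compare(200u, 200u); // prints "got here" under armeabi-v7a and arm64-v8a
    return 0;
}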

Related

Passing pointer of unsigned int to pointer of long int

I have some sample code which works properly on a 32-bit system, but when I cross-compile it for a 64-bit system and run it on a 64-bit machine, it behaves differently.
Can anyone tell me why this is happening?
#include <stdio.h>
#include <time.h>
void func(time_t * inputArg)
{
printf("%ld\n",*inputArg);
}
int main()
{
unsigned int input = 123456;
func((time_t *)&input);
}
Here "time_t" is a type defined in linux system library header file which is of type "long int".
This code is working fine with a 32-bit system but it isn't with 64-bit.
For 64-bit I have tried this:
#include <stdio.h>
#include <time.h>
void func(time_t * inputArg)
{
printf("%ld\n",*inputArg);
}
int main()
{
unsigned int input = 123456;
time_t tempVar = (time_t)input;
func(&tempVar);
}
Which works fine, but I have used the first method throughout my application a number of times. Any alternate solutions would be appreciated.
Can anyone tell me why this is happening?
Dereferencing an integer pointer whose size differs from that of the pointed-to object has undefined behaviour.
If the pointed-to integer is smaller than the pointer's pointed-to type, you will read unrelated bytes as part of the dereferenced number.
Your fix works because you pass a pointer to an object of the proper type, but consider that unsigned int cannot represent all of the values that time_t can.
The best fix is to use the proper type from the start: make the input a time_t.
Your "fixed" code has a cast that lets the compiler convert your unsigned int value to a time_t. Your original code assumes that they're identical.
On your 32-bit system they are identical, so you get lucky. On your 64-bit system you find out what happens when you invoke Undefined Behavior.
In other words, both C and C++ allow you to cast pointers to whatever you want, but it's up to you to make sure such casts are safe.
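Since you say the first method appears throughout the application, one low-churn alternative (a sketch, not the only option; func_uint is a hypothetical helper name) is a small wrapper that performs the conversion by value once, so no pointer type-punning ever occurs:

#include <stdio.h>
#include <time.h>

void func(time_t *inputArg)
{
    printf("%ld\n", (long)*inputArg); /* cast, since time_t's exact type varies */
}

/* Hypothetical wrapper: converts by value before taking an address. */
void func_uint(unsigned int input)
{
    time_t tmp = (time_t)input;
    func(&tmp);
}

int main(void)
{
    func_uint(123456); /* prints 123456 on both 32- and 64-bit builds */
    return 0;
}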
Thank you for the response.
Actually, I found my mistake when I printed the sizeof long int on the 32-bit and 64-bit machines:
32-bit machine:
sizeof(int) = 32 bits, sizeof(long int) = 32 bits, sizeof(long long int) = 64 bits
64-bit machine:
sizeof(int) = 32 bits, sizeof(long int) = 64 bits, sizeof(long long int) = 64 bits
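For reference, a quick way to verify these sizes on any platform (note that sizeof reports bytes, not bits):

#include <stdio.h>

int main(void)
{
    /* sizeof yields bytes; multiply by CHAR_BIT (usually 8) for bits */
    printf("sizeof(int)           = %zu\n", sizeof(int));
    printf("sizeof(long int)      = %zu\n", sizeof(long int));
    printf("sizeof(long long int) = %zu\n", sizeof(long long int));
    return 0;
}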

C++ - ridiculous long long range

My friend and I have just encountered a very weird issue with the range of long long.
Basically, my computer has a 64-bit processor but a 32-bit OS on it. He has both a 32-bit OS and a 32-bit CPU.
First, we printfed sizeof(long long). For both of us it was 8.
Then we did this:
long long blah = 1;
printf ("%lld\n", blah<<40);
For me this prints 1099511627776 (the correct result). For him it is 0.
How is that possible? We both have the same sizeofs.
Thanks in advance.
EDIT:
I compiled and ran it under Win7 with Code Blocks 12.11. He uses Win XP and the same version of CB.
EDIT2: Source codes as requested:
#include <cstdio>
int main()
{
long long blah = 1;
printf ("%lld\n", blah<<40);
return 0;
}
and
#include <cstdio>
int main()
{
printf ("%d", sizeof(long long));
return 0;
}
I would guess that you and your friend are linking to different versions of the infamous MSVCRT.DLL, or possibly some other library.
From the Code::Blocks FAQ:
Q: What Code::Blocks is not?
A: Code::Blocks is not a compiler, nor a linker. Release packages of Code::Blocks may include a compiler suite (MinGW/GCC), if not provided by the target platform already. However, this is provided "as-is" and not developed/maintained by the Code::Blocks development team.
So the statement "I compiled and ran it under Win7 with Code Blocks 12.11" isn't strictly true; you cannot compile with something that isn't a compiler.
Figure out which compiler you two are actually using (see above: it is not "Code Blocks") and which library.
This could be one of two problems: either the printing system cannot cope with long long, or the shift operator does not work over 32 bits. Try this:
#include <cstdio>
int main()
{
union
{
long long one;
// x86 is little-endian: the low word is stored first
struct {
unsigned long lo;
unsigned long hi;
} two;
} xxx;
xxx.one = 1LL;
xxx.one = xxx.one << 40;
printf ("%016llx %08x %08x\n", xxx.one, xxx.two.hi, xxx.two.lo);
return 0;
}
If the first number is all zeros but one of the other two isn't, then it is printf that cannot cope. If all the numbers are zeros, then the shift isn't being carried out over the full 64 bits.

Pointers Casting Endianness

#include "stdio.h"
typedef struct CustomStruct
{
short Element1[10];
}CustomStruct;
void F2(char* Y)
{
*Y=0x00;
Y++;
*Y=0x1F;
}
void F1(CustomStruct* X)
{
F2((char *)X);
printf("s = %x\n", (*X).Element1[0]);
}
int main(void)
{
CustomStruct s;
F1(&s);
return 0;
}
The above C code prints 0x1f00 when compiled and run on my PC.
But when I flash it to an embedded target (a microcontroller) and debug it, I find that
(*X).Element1[0] is 0x001f.
1. Why are the results different on the PC and on the embedded target?
2. What can I modify in this code so that it prints 0x001f on the PC as well, without changing the core of the code (by adding a compiler option or something, maybe)?
shorts are typically two bytes and 16 bits. When you say:
short s;
((char*)&s)[0] = 0x00;
((char*)&s)[1] = 0x1f;
This sets the first of those two bytes to 0x00 and the second to 0x1f. The thing is that C++ doesn't specify what setting the first or second byte does to the value of the overall short, so different platforms can do different things.
In particular, some platforms say that setting the first byte affects the 'most significant' bits of the short's 16 bits and setting the second byte affects the 'least significant' bits. Other platforms say the opposite: setting the first byte affects the least significant bits and setting the second byte affects the most significant bits.
These two platform behaviours are referred to as big-endian and little-endian respectively.
The solution to getting consistent behavior independent of these differences is to not access the bytes of the short this way. Instead you should simply manipulate the value of the short using methods that the language does define, such as with bitwise and arithmetic operators.
short s;
s = (0x1f << 8) | (0x00 << 0); // set the most significant bits to 0x1f and the least significant bits to 0x00.
The problem is that, for many reasons, I can only change the body of the function F2. I cannot change its prototype. Is there a way to find the sizeof Y before it has been cast, or something?
You cannot determine the original type and size using only the char*. You have to know the correct type and size through some other means. If F2 is never called except with a CustomStruct, then you can simply cast the char* back to CustomStruct*, like this:
void F2(char* Y)
{
CustomStruct *X = (CustomStruct*)Y;
X->Element1[0] = 0x1F00;
}
But remember, such casts are not safe in general; you should only cast a pointer back to what it was originally cast from.
The portable way is to change the definition of F2:
void F2(short * p)
{
*p = 0x1F;
}
void F1(CustomStruct* X)
{
F2(&X->Element1[0]);
printf("s = %x\n", (*X).Element1[0]);
}
When you reinterpret an object as an array of chars, you expose the implementation details of the representation, which is inherently non-portable and... implementation-dependent.
If you need to do I/O, i.e. interface with a fixed, specified, external wire format, use functions like htons and ntohs to convert and leave the platform specifics to your library.
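For instance, a minimal sketch (assuming a POSIX system, where <arpa/inet.h> declares these functions):

#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    unsigned short host = 0x001F;
    unsigned short wire = htons(host); /* host order -> network (big-endian) order */
    unsigned short back = ntohs(wire); /* network order -> host order */
    printf("host=%04x back=%04x\n", host, back); /* back always equals host */
    return 0;
}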
It appears that the PC is little-endian and the target is either big-endian or has a 16-bit char.
There isn't a great way to modify the C code on the PC unless you replace your char * references with short * references, and perhaps use macros to abstract the differences between your microcontroller and your PC.
For example, you might make a macro PACK_BYTES(hi, lo) that packs two bytes into a short the same way regardless of machine endianness. Your example becomes:
#include "stdio.h"
#define PACK_BYTES(hi,lo) (((short)((hi) & 0xFF)) << 8 | (0xFF & (lo)))
typedef struct CustomStruct
{
short Element1[10];
}CustomStruct;
void F2(short* Y)
{
*Y = PACK_BYTES(0x00, 0x1F);
}
void F1(CustomStruct* X)
{
F2(&(X->Element1[0]));
printf("s = %x\n", (*X).Element1[0]);
}
int main(void)
{
CustomStruct s;
F1(&s);
return 0;
}

Is it possible to assign a long long return value to int64_t without losing precision on a 64-bit machine?

I have implemented the code below:
#include <stdio.h>
#include <string.h>
#include <ctype.h>
#include <stdlib.h>
#include <sys/types.h>
int main()
{
{
int64_t i64value1 = 0;
int64_t i64value2 = 0;
long long llvalue = 0;
const char *s = "10811535359";
i64value1 = atoll(s);
llvalue = atoll(s);
i64value2 = llvalue;
printf("s : [%s]\n",s);
printf("i64value1 : [%d]\n",i64value1);
printf("llvalue : [%lld]\n",llvalue);
printf("i64value2 : [%d]\n",i64value2);
}
The output of the above program is:
s : [10811535359]
i64value1 : [-2073366529]
llvalue : [10811535359]
i64value2 : [-2073366529]
The compiler used is :
gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)
The OS is x86_64 GNU/Linux 2.6.18-194
Since long long is a signed 64-bit integer and is, for all intents and purposes, identical to the int64_t type, logically int64_t and long long should be equivalent types. And some places recommend using int64_t instead of long long.
But when I look at stdint.h, it tells me why I see the above behavior:
# if __WORDSIZE == 64
typedef long int int64_t;
# else
__extension__
typedef long long int int64_t;
# endif
In a 64-bit build, int64_t is long int, not long long int.
My question is: is there a workaround/solution for assigning a long long return value to an int64_t without losing precision on a 64-bit machine?
Thanks in advance.
The loss does not happen in the conversions but in the printing:
printf("i64value1 : [%d]\n",i64value1);
The int64_t argument is accessed as if it were an int. This is undefined behaviour, but usually the low 32 bits are sign extended.
Proper compiler warnings (such as gcc -Wformat) should complain about this.
Jilles is right.
Either use std::cout <<, which should handle it correctly, or use printf("i64value2 : [%lld]\n", i64value2); when printing.
Under the x86-64 ABI, long long is the same size as long, which is to say a 64-bit value, so there is no precision loss in this case.
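A portable alternative is the PRId64 macro from <inttypes.h> (guaranteed by C99), which expands to the correct format specifier for int64_t on every platform. A minimal sketch:

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

int main(void)
{
    const char *s = "10811535359";
    int64_t i64value = (int64_t)atoll(s);
    /* PRId64 matches int64_t whether it is typedef'd as long or long long */
    printf("i64value : [%" PRId64 "]\n", i64value);
    return 0;
}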

C++ long overflowing prematurely

I'm having a bizarre problem in C++ where the long data type overflows long before it should. What I'm doing (with success so far) is making integers behave like floats, so that the range [-32767, 32767] maps to [-1.0, 1.0]. Where it stumbles is with larger arguments representing values greater than 1.0:
inline long times(long a, long b) {
printf("a=%ld b=%ld ",a,b);
a *= b;
printf("a*b=%ld ",a);
a /= 32767l;
printf("a*b/32767=%ld\n",a);
return a;
}
int main(void) {
printf("%ld\n",times(98301l,32767l));
}
What I get as output is:
a=98301 b=32767 a*b=-1073938429 a*b/32767=-32775
-32775
So times(98301,32767) is analogous to 3.0*1.0. This code works perfectly when the arguments to times are less than 32767 (1.0), but none of the intermediate steps with the arguments above should overflow the 64 bits of long.
Any ideas?
long is not necessarily 64 bits. Try long long instead.
The type long is not necessarily 64 bits. On a 32-bit architecture (at least with MS Visual C++), the long type is 32 bits. Check it with sizeof(long). There is also the long long data type, which may help.
You probably have 32-bit longs. Try using long long instead.
98301 * 32767 = 3221028867, while a signed 32-bit long overflows at 2147483648.
The C standard only guarantees that long will have at least 32 bits (which is exactly what it is on most 32-bit platforms).
If you need 64 bits, use long long. It's guaranteed to hold at least 64 bits.
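Applied to the question's code, a minimal fix might look like this (a sketch; it relies on long long being at least 64 bits, so the intermediate product of about 3.2e9 cannot overflow):

#include <cstdio>

// Fixed-point multiply where 32767 represents 1.0. Using long long for the
// intermediate product avoids the 32-bit overflow seen with plain long.
inline long long times(long long a, long long b) {
    return (a * b) / 32767LL;
}

int main() {
    printf("%lld\n", times(98301LL, 32767LL)); // prints 98301, i.e. 3.0
    return 0;
}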