Why is DMD not able to compile the following D code snippet?

I am learning D and use run.dlang.io for debugging. The following code runs without issues on run.dlang.io:
import std.stdio;
import std.algorithm;
import std.range;
import std.typecons;
static bool even(Tuple!(ulong, double) a) {
    return (a[0] & 1) == 0;
}
void main() {
    double[] arr = [31, 22, -3, 44, 51, 26, 47, 58, 19, 10];
    auto res1 = arr.enumerate.filter!(even).map!(a => a[1]);
    writeln(res1);
}
However, DMD32 v2.088 reports an error when compiling the exact same code (dmd temp.d) on Windows 10.
Error: function temp.even(Tuple!(ulong, double) a) is not callable using argument types (Tuple!(uint, "index", double, "value"))
Meanwhile the LDC compiler (1.18.0-beta1, based on DMD v2.088.0 and LLVM 8.0.1) compiles the same file without issues.
run.dlang.io uses the DMD 2.087 compiler and there it somehow magically works, so why doesn't it work on Windows?

On Windows your application is built for 32 bit by default. On OSX and Linux (which is what run.dlang.io runs on) it is built for 64 bit by default.
Because of that, the array indices are uint and ulong respectively. In your code you used Tuple!(ulong, double), but on 32 bit the tuples carry uint indices.
Instead of ulong/uint you should use size_t for indices, which maps to uint on 32 bit and ulong on 64 bit. size_t is defined in object.d, which is imported by default.
So if you change your function to
static bool even(Tuple!(size_t, double) a) {
    return (a[0] & 1) == 0;
}
it will run on both 32 bit and 64 bit.
On Windows you can also test your code with dub by running it with --arch=x86_64, or with the dmd flag -m64, where it should already work without change. I recommend always testing your application on both 32 bit and 64 bit to make sure you are using size_t where needed.

Related

Wrong reinterpretation of nullptr in a Qt 5.15-based project

Both the GCC (version 12.2) and Clang (version 14.0) compilers interpret nullptr as a 32-bit integer (int) in some places, and this causes errors.
For example, in the qhashfunctions.h file there is a variant of the qHash function that takes nullptr as its main argument.
Q_DECL_CONST_FUNCTION inline uint qHash(std::nullptr_t, uint seed = 0) Q_DECL_NOTHROW
{
    return qHash(reinterpret_cast<quintptr>(nullptr), seed);
}
Either compiler reports that int cannot be cast to quintptr (unsigned long long).
The second example has the same problem. In the std_thread.h file there is the following code:
_M_start_thread(_State_ptr(new _State_impl<_Wrapper>(
std::forward<_Callable>(__f), std::forward<_Args>(__args)...)),
__depend);
}
Earlier in this file __depend is declared as nullptr (auto __depend = nullptr;). This means that the function pointer (the second argument) is nullptr. Either compiler reports that a parameter of type void (*)() cannot be initialised with a value of type int.
What is the solution to the problem?
I use ArchLinux (x86-64) with the latest updates and Qt version 5.15.8. Building is done through Qt Creator.
I also tried to create another Qt-based project and write the following code in the main function:
unsigned long long i = reinterpret_cast<unsigned long long>(nullptr);
Compilation succeeded...
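For reference, the same cast also compiles cleanly as a self-contained file outside the Qt headers (a minimal sketch of the test just described, using unsigned long long exactly as above):
int main() {
    // Converting std::nullptr_t to a pointer-sized unsigned integer via
    // reinterpret_cast is valid C++ and yields 0, which is what the qHash
    // overload in qhashfunctions.h relies on.
    unsigned long long i = reinterpret_cast<unsigned long long>(nullptr);
    return static_cast<int>(i); // always 0
}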

Runtime error with 100000 recursive calls in C++

I tried to run this C++14 code with Cygwin and MinGW on Windows 10, but in both cases I get a runtime error. On Ubuntu 16.04 it runs without any problem.
#include <iostream>
using namespace std;
int rec(int n){
    if(n == 0) return 0;
    return 1 + rec(n-1);
}
int main(){
    int k = 123456;
    cout << rec(k) << endl;
    return 0;
}
But if I change the value of k to some number around 10^4, like k = 12345, it works even on Windows 10.
What could be the reason behind this strange behavior?
Each recursive function call occupies some space on the stack. Different OSes manage memory differently, and evidently Windows 10 doesn't maintain a stack as big as the one Ubuntu does. Maybe there's a way to tweak the stack size, but I'm not sure how on Windows.
The call stack is indeed limited, and its limit depends upon the computer and the OS (as a rule of thumb only: 1 Megabyte on Windows, 8 Megabytes on Linux).
On Linux you could use setrlimit(2) to change the size of the call stack (e.g. via the ulimit builtin in your shell).
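A minimal sketch of doing that from within the program on Linux (the 64 MiB figure is only an example value and must stay within the hard limit):
#include <sys/resource.h>
#include <iostream>

int main() {
    // Query the current stack limit, then raise the soft limit
    // before doing deep recursion.
    rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0) {
        std::cout << "soft stack limit: " << rl.rlim_cur << " bytes\n";
        rl.rlim_cur = 64 * 1024 * 1024; // e.g. 64 MiB; must not exceed rl.rlim_max
        if (setrlimit(RLIMIT_STACK, &rl) != 0)
            std::cout << "setrlimit failed\n";
    }
    return 0;
}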
If you ask your compiler to optimize, e.g. compile with g++ -O3 -fverbose-asm -S, you'll see that rec is no longer compiled as a recursive function.
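In effect the optimizer turns the recursion into a loop; written by hand, the equivalent would be something like this sketch (not the compiler's actual output):
// Iterative equivalent of rec(): counts down to 0, so the result for a
// non-negative n is simply n, and no stack frames accumulate.
int rec(int n){
    int acc = 0;
    while(n != 0){
        acc += 1;
        n -= 1;
    }
    return acc;
}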

Strange bit operation result in C++ / disassembly?

In a C++ function, I have the following variables:
uint32_t buf = 229; //Member variable
int bufSize = 0; //Member variable
static constexpr const uint32_t all1 = ~((uint32_t)0);
And this line of code:
uint32_t result = buf & (all1 >> (32-bufSize ));
Then the value of result is 229, both in console output and when debugging with gdb. The expected value is of course 0, and every attempt to make a minimal example reproducing the problem failed.
So I went to the disassembly and executed it step by step. The bit shift operation is here:
0x44c89e <+0x00b4> 41 d3 e8 shr %cl,%r8d
The debugger registers show that before the instruction, rcx=0x20 and r8=0xFFFFFFFF
After the instruction, we still have r8=0xFFFFFFFF
I have a very poor knowledge of x86-64 assembly, but this instruction is supposed to be an unsigned shift, so why the heck isn't the result 0?
I am using mingw-gcc x86-64 4.9.1
Compile with -Wall and you will get a -Wshift-count-overflow warning, because you are shifting by 32, which is the width of unsigned int; a shift count equal to or larger than the type's width is undefined behaviour. To see it for yourself, change 32 to 31, recompile, and compare the generated assembly: on x86 the shr instruction masks the shift count to 5 bits, so a count of 32 acts as a shift by 0 and r8d is left unchanged.
The easiest fix for you would be to use a 64-bit type (such as uint64_t) for all1, so the shift count of 32 stays below the operand width. Note that on MinGW a plain long is still 32 bits, so it would not help here.
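A sketch of that fix, using the values from the question (the point is that the shift count can now reach 32 without matching the operand width):
#include <cstdint>
#include <iostream>

int main() {
    uint32_t buf = 229;
    int bufSize = 0;

    // Keep the 32-bit all-ones pattern in a 64-bit variable: the shift
    // count (32 - bufSize) is then always smaller than the 64-bit operand
    // width, so the behaviour is defined and the mask is 0 for bufSize == 0.
    uint64_t all1 = ~static_cast<uint32_t>(0);
    uint32_t result = static_cast<uint32_t>(buf & (all1 >> (32 - bufSize)));
    std::cout << result << '\n'; // prints 0
    return 0;
}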

64 Bit Android compiling but has problems at runtime

I have an Android App with a big C++ library which runs smoothly when compiled for
APP_ABI := armeabi-v7a //32 bit
but has issues when compiled with
APP_ABI := arm64-v8a //64 bit
All JNI jint variables have been converted to jlong variables according to the NDK docs.
My problem is that for some reason I cannot compare variables of any data type other than int when they come in as function parameters.
This works:
unsigned long a = 200;
unsigned long b = 200;
if(a == b) {
    LOGE("got here"); //This works
}
This fails:
void myClass::MyFunction(unsigned long c, unsigned long d) {
    if(c == d) {
        LOGE("got here"); //This does NOT work
    }
}
Mind you, both of the above snippets work in the 32-bit build. The values I read from the variables c and d are identical when logged.
Interestingly this works in the 64 bit version (int variables):
void myClass::MyFunction(int e, int f) {
    if(e == f) {
        LOGE("got here"); //This works
    }
}
Only integers can be compared. I have tried long, double, long long, unsigned and signed...
My NDK version is 10d (latest). I have tried with both the 32-bit and 64-bit versions of the NDK and the result is the same. My development platform is a Win7 64-bit desktop.
Am I missing something essential?
Found the solution to my problems:
Data type sizes are different on 64 bit, so my library was expecting 4 bytes but long is 8 bytes there (4 bytes when compiled for 32 bit). Casting the values to uint32_t did the trick.
A plain int worked because that is 4 bytes either way.
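As a minimal sketch of that fix (MyFunction is a stand-in for the method in the question and printf replaces the LOGE macro):
#include <cstdint>
#include <cstdio>

// uint32_t is 4 bytes on both armeabi-v7a and arm64-v8a, so the
// comparison behaves identically on the 32-bit and 64-bit builds.
void MyFunction(uint32_t c, uint32_t d) {
    if (c == d) {
        std::printf("got here\n");
    }
}

int main() {
    MyFunction(200u, 200u); // prints "got here" on both ABIs
    return 0;
}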

Python ints passed into C++ functions as longs using ctypes get truncated

I'm developing on a Mac using Mac OS X 10.8.2 with the latest Xcode and the stock Python interpreter.
Here's some code I put into fails.cpp
#include <iostream>
using namespace std;
extern "C" {
void mysort(long *data, long data2) {
    cout << hex << *data << ' ' << data2 << endl;
}
}
Here's some code to call it from Python that I put in fails.py:
import ctypes
sort_dll = ctypes.CDLL("fails.dylib")
mysort = ctypes.CFUNCTYPE(None, ctypes.POINTER(ctypes.c_long), ctypes.c_long)(sort_dll.mysort)
a = ctypes.c_long(0x987654321)
mysort(ctypes.byref(a), 0x123456789)
I compile and run with
c++ -arch x86_64 -o fails.o -c fails.cpp && g++ -o fails.dylib -dynamiclib fails.o && python fails.py
The result is:
987654321 23456789
Why is the 64-bit integer passed by value being truncated to 32 bits? Surprisingly, the pointer to a 64-bit long isn't truncated.
I suspect it is because the Python ctypes library has decided that c_long is 32 bits in size. There are some hints in the docs:
Represents the C signed int datatype. The constructor accepts an optional integer initializer; no overflow checking is done. On platforms where sizeof(int) == sizeof(long) it is an alias to c_long.
This code will tell you how big c_long actually is:
import ctypes
print ctypes.sizeof(ctypes.c_long())
The value the pointer references wouldn't be truncated, since ctypes only has to marshal the pointer itself. It's possible the pointer is being truncated, but it doesn't matter as the high bits are all zero anyway. It's also possible that ctypes decided that ctypes.POINTER is 64-bit. You can find out by modifying the above example just slightly.
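On the C++ side, one way to take the platform's notion of long out of the picture entirely, shown here only as a sketch, is to declare the interface with fixed-width types:
#include <cstdint>
#include <iostream>
using namespace std;
// Variant of fails.cpp with an explicit 64-bit parameter: int64_t is
// 8 bytes on every platform, so the expected width no longer depends
// on what the toolchain or ctypes thinks "long" means.
extern "C" {
void mysort(int64_t *data, int64_t data2) {
    cout << hex << *data << ' ' << data2 << endl;
}
}
The matching types on the Python side would then be ctypes.POINTER(ctypes.c_int64) and ctypes.c_int64.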