Old Fortran "shared" feature in OPEN() causing open file failure - fortran

I am using a code written in a very old Fortran dialect. There are some lines using the SHARED option in the OPEN() statement, e.g.:
OPEN(UNIT=17,STATUS='OLD',FORM='UNFORMATTED',READONLY,SHARED)
The compilation is OK, and when running on one machine everything is fine. But when I later moved to another machine, it gives the error:
forrtl: Function not implemented
forrtl: severe (30): open failure, unit 17, file
The compiler I use is the Linux version of Intel Fortran 14.0.3 on both machines. When the SHARED keyword is removed, everything is fine. But since the code is not written by myself, I would like to keep it if possible. The reason for using this option is that I may run the code on many threads at the same time, and they may have to access some files at the same time.
Do you have any idea what causes this?
Is it safe to just remove shared from the code?

Related

C++ Versions, do they auto-detect the version of an exe?

Okay, so I know that there are multiple C++ versions, and I don't really know much about the differences between them, but my question is:
Let's say I made a C++ application in C++11 and sent it off to another computer. Would it come up with errors from other versions of C++, or will it automatically detect that version and run with it? Or am I getting this wrong, and is it defined at compile time? Someone please tell me, because I have yet to find a single answer to my question on Google.
It depends on whether you copy the source code to the other machine and compile it there, or compile it on your machine and send the resulting binary to the other computer.
C++ is translated by the compiler to machine code, which runs directly on the processor. Any computer with a compatible processor will understand the machine code, but there is more than that. The program needs to interface with a filesystem, graphic adapters, and so on. This part is typically handled by the operating system, in different ways of course. Even if some of this is abstracted by C++ libraries, the calls to the operating system are different, and specific to it.
A compiled binary for ubuntu will not run on windows, for example, even if both computers have the same processor and hardware.
If you copy the source code to the other machine, and compile it there (or use a cross-compiler), your program should compile and run fine, if you don't use OS-specific features.
The C++ version does matter for compilation: you need a C++11-capable compiler if you have C++11 source code. But once the program is compiled, the version does not matter any more.
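To make the compile-time point concrete, here is a minimal sketch (demo.cpp and the exact g++ invocation are just illustrative): the standard version is only chosen when the source is compiled, e.g. with g++ -std=c++11 demo.cpp -o demo, and the resulting binary carries no notion of a "C++ version" at run time.

// demo.cpp - uses a C++11 feature (range-based for with auto).
// A pre-C++11 compiler rejects this file, but once it is compiled
// the machine code is indistinguishable from any other binary.
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> values = {1, 2, 3};
    for (auto v : values)
        std::cout << v << '\n';
    return 0;
}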
C++ is compiled to machine code, which is then runnable on any computer having that architecture e.g. i386 or x64 (putting processor features like SSE etc. aside).
Java, to give a counterexample, is different. There the code is compiled to a bytecode format that is machine-independent. This bytecode format is read and understood by the Java Virtual Machine (JVM). The JVM then has to be available for your architecture, and the correct version of it has to be installed.
Or am I getting this wrong and is it defined at compile time?
This is precisely the idea: the code is compiled, and after that the language version is almost irrelevant. The only possible pitfall would be if a newer C++ version included a breaking change to the standard C++ library (the library, not the language itself!). However, since the vast majority of that library is template code, it's compiled along with your own code anyway; it's basically baked into your .exe file, so it's just as portable as your code. Also, both the C and C++ designers take great care not to break old code, so you can expect even the parts that are provided by the system itself (the standard C library) not to break anything.
So, even though there are things that could break in theory, pure C++ code should run fine on all machines that understand the same .exe format as the machine it was compiled on.

Calling NASM from a C program using system() produces different object code than calling it from Bash

I've implemented a reasonably good optimizing compiler (for a toy language) and have come across a rather puzzling scenario. I can take an input file and produce assembly from it, so in that respect the "compiler" is finished. If I take that assembly file, assemble it with NASM, and link it to my runtime support library with G++ (the runtime needs libstdc++), I get a working executable with no errors. However, I'd like to be able to compile to an executable in one command, so I added some calls to system() to my compiler, passing the EXACT SAME COMMANDS as I was using in Bash. When I run the compiler, it seems to assemble properly, but the linking step (again, using g++) fails with an undefined reference to main. Confused, I attempted to link manually (without reassembling, so I was using the object file produced by the NASM run with system()), and received the same error. If I reassemble using the new compiler's assembly output, I have no problems, which has led me to believe that NASM is the problem. Like I said, the commands are exactly the same (I literally copied and pasted them just to make sure after the first time). Is it environment variables or something? What's going on?
EDIT:
I manually assembled an object file, again using the same command as the one in the compiler, and I did a vim diff between the two. The compiler-generated one seems to only contain the ELF header.
EDIT 2:
A screenshot of the diff
EDIT 3:
I tried using system to call a Perl script that in turn would call NASM and G++, still no luck.
Fixed it! It was the file not being flushed due to a race condition. Thanks for all the help, much appreciated.
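For anyone hitting the same issue: the fix boils down to flushing and closing the assembly output stream before the system() calls run the external tools. Below is a minimal sketch; the stream name, file names, and exact NASM/G++ command lines are placeholders rather than the actual compiler's.

#include <cstdlib>
#include <fstream>
#include <string>

void assemble_and_link(const std::string& asmPath)
{
    std::ofstream out(asmPath);
    // ... write the generated assembly into 'out' ...

    out.flush();   // push every buffered byte out to the OS
    out.close();   // close the file before external tools read it

    // Only now is it safe to hand the file to the assembler and linker.
    std::system(("nasm -f elf64 " + asmPath + " -o prog.o").c_str());
    std::system("g++ prog.o runtime.o -o prog");
}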

MinGW: Linking with LAPACK and BLAS causes C++ exceptions to become unhandled

The situation is simple, but strange. When I compile my program without the LinearAlgebra.o source (which requires linking to LAPACK), C++ exceptions are caught and handled. When I do not include that compilation unit but still link to the libraries (-llapack -lblas), exceptions are also caught and handled. But once I get it in there (the code from it runs just fine), C++ exceptions are no longer handled correctly, and I get the Windows crash handler's "Program has stopped responding, report back to HQ" nonsense.
Here I shed light on what is going on inside this source file. I did keep it pretty simple, but I'm not sure if it's really kosher.
I suspect it is something about invoking Fortran routines that causes C++ exceptions to stop working, but I have no idea how to go about fixing this.
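For reference, a call from C++ into a gfortran-built LAPACK routine typically looks like the sketch below (trailing-underscore symbol names, every argument passed by reference, column-major arrays). This is only an illustration of the calling convention; the actual declarations in LinearAlgebra.o may differ.

// Solve A*x = b for a 3x3 system using LAPACK's dgesv.
extern "C" {
    void dgesv_(int* n, int* nrhs, double* a, int* lda,
                int* ipiv, double* b, int* ldb, int* info);
}

bool solve3x3(double a[9], double b[3])   // a is column-major; b is overwritten with x
{
    int n = 3, nrhs = 1, lda = 3, ldb = 3, info = 0;
    int ipiv[3];
    dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);
    return info == 0;                     // nonzero info signals a singular matrix or bad argument
}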
UPDATE:
I am very glad to have found a temporary workaround for this issue: I am using MinGW's gfortran compiler to directly compile the LAPACK and BLAS routines I am currently using.
Linking those object files into my C++ project with g++ and -lgfortran works flawlessly, and my exceptions are still being handled correctly! As a bonus, this lets me include only the LAPACK routines I intend to use, so I no longer have to link a ~4 MB library.
Edit: I think if I statically link a library it only "grabs what it needs" so it being 4MB wouldn't matter in that case.
I have had great results with GotoBLAS2. Running the included script produces a massive 19 MB static library optimized for my machine. It works flawlessly by simply linking it, and all of my Fortran-style calls just work.

Mex dynamic memory management issue with std::vector in linked external DLL; Segmentation error

I am trying to create a MEX file that interfaces MATLAB with an external C++ library that communicates with some hardware. An import library and precompiled DLL (.lib and .dll) are provided by the hardware vendor for my version of VC++, and I was able to use them from C++ without any issue.
However, I run into a segmentation error at run time when the code is written as a MEX file (compiled with the same version of VC++). After some investigation with the VC++ debugger, the likely culprit seems to be that one of the external DLL functions returns a std::vector, and probably tries to dynamically allocate memory for the vector container somewhere inside the function. I know that if I use std::vector in my own MEX function, everything works fine, but I suspect that the MEX header wraps the std::vector container in my own code for memory management(?), as required for all dynamically allocated memory in MEX code, whereas it cannot do the same for the precompiled DLL.
Now the question is: since I cannot modify the external DLL and have no access to its source files, is there any way to work with this external DLL such that the dynamic memory becomes managed by MATLAB (perhaps a wrapper of some sort?) and thereby avoid the segmentation error and get the correct data back? Or, if my analysis is wrong, please correct me too!
Please let me know if there are any ideas or hacks, thanks!
My system: Windows 7 SP1 32 bit, MATLAB 2009b, Visual C++ 2008 Pro.
I also posted the same question at:
http://www.mathworks.com/matlabcentral/answers/9294-mex-dynamic-memory-management-issue-with-std-vector-in-linked-external-dll-segmentation-error
You can also share your insights there if you have an account, thanks!
Thanks everyone for the answers and comments. I was able to resolve the issue with some help from the friendly folks at MathWorks.
From the original post at http://www.mathworks.com/matlabcentral/answers/9294-mex-dynamic-memory-management-issue-with-std-vector-in-linked-external-dll-segmentation-error :
You are probably seeing an incompatibility between the STL library and/or the compiler options used by your precompiled DLL and those used by MATLAB and the mex command. MATLAB 2009b was built with MSVC 2005.
You may be able to fix the problem by changing the options used by mex, or by building your MEX file directly with MSVC. One example of an option that may affect things is SECURE_SCL=0. I would start by building your test program with the options MATLAB is using, to find the problematic option, then try removing that option when building the MEX file.
Because of this sort of incompatibility, using STL objects in the APIs of third-party compiled libraries is usually a bad idea.
Following his advice, I removed the SECURE_SCL=0 option from the mex options file at
C:\Users\(username)\AppData\Roaming\MathWorks\MATLAB\R2009b\mexopts.bat
Then I recompiled the MEX file, and now everything works like a charm - the function returns the correct data and the segmentation error no longer occurs.
The MEX API doesn't do anything special with STL containers, since they cannot be passed between MATLAB and a MEX function (the only non-primitive data type that can cross that boundary is the mxArray). It's basically up to the MEX function to make sure that the memory used by the STL container is handled properly; MATLAB doesn't track it.
Passing a std::vector across a DLL boundary is somewhat tricky. I'd assume the vendor would be aware of this, and provide you with an appropriate header file with the correct declspecs and such, but in case they didn't, you might want to refer to this Microsoft support link to read more about what is required.
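To illustrate the point about MATLAB-managed memory, a common pattern is to copy the vector's contents into an mxArray inside the MEX gateway, so MATLAB only ever sees memory it allocated itself. A minimal sketch follows, where get_data_from_device is a hypothetical stand-in for the vendor's DLL call; note that this addresses ownership only, not the runtime/option mismatch described above.

#include <cstddef>
#include <vector>
#include "mex.h"

// Hypothetical vendor function exported by the precompiled DLL.
std::vector<double> get_data_from_device();

void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[])
{
    // The vector lives entirely on the C++ side.
    std::vector<double> data = get_data_from_device();

    // Create a MATLAB-managed column vector and copy the values into it.
    plhs[0] = mxCreateDoubleMatrix(static_cast<mwSize>(data.size()), 1, mxREAL);
    double* out = mxGetPr(plhs[0]);
    for (std::size_t i = 0; i < data.size(); ++i)
        out[i] = data[i];
}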

Same binary code on Windows and Linux (x86)

I want to compile a bunch of C++ files into raw machine code and then run it with a platform-dependent starter written in C. Something like:
fread(buffer, 1, len, file);
a = ((int (*)(int))buffer)(b);
How can I tell g++ to output raw code?
Will function calls work? How can I make it work?
I think the calling conventions of Linux and Windows differ. Is this a problem? How can I solve it?
EDIT: I know that PE and ELF prevent the DIRECT starting of the executable. But that's what I have the starter for.
There is one (relatively) simple way of achieving some of this, and that's called "position independent code". See your compiler documentation for this.
Meaning you can compile some sources into a binary which will execute no matter where in the address space you place it. If you have such a piece of x86 binary code in a file and mmap() it (or the Windows equivalent), it is possible to invoke it from both Linux and Windows.
The limitations already mentioned are of course still present - namely, the binary code must restrict itself to a calling convention that is identical on (or at least representable on) both platforms (for 32-bit x86, that would be passing arguments on the stack and returning values in EAX), and of course the code must be fully self-contained: no DLL function calls, since resolving these is system-dependent, and no system calls either.
I.e.:
You need position-independent code
You must create self-contained code without any external dependencies
You must extract the machine code from the object file.
Then mmap() that file, initialize a function pointer, and (*myblob)(someArgs) may do (see the sketch below).
If you're using gcc, the -ffreestanding -nostdinc -fPIC options should give you most of what you want regarding the first two, then use objdump to extract the binary blob from the ELF object file afterwards.
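A minimal sketch of the POSIX side of that idea, written here in C++ (the question's starter is in C, but the calls are the same); "blob.bin", the int(int) entry signature, and the sample argument are assumptions, and error handling is kept to the bare minimum. A Windows starter would do the equivalent with CreateFile and MapViewOfFile or VirtualAlloc.

#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
    int fd = open("blob.bin", O_RDONLY);
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) return 1;

    // Map the blob with execute permission; an ordinary heap buffer
    // would be blocked by DEP/NX on most systems.
    void* code = mmap(nullptr, st.st_size, PROT_READ | PROT_EXEC,
                      MAP_PRIVATE, fd, 0);
    close(fd);
    if (code == MAP_FAILED) return 1;

    // Assumed entry point: position-independent, self-contained, int(int).
    int (*entry)(int) = reinterpret_cast<int (*)(int)>(code);
    std::printf("blob returned %d\n", entry(42));

    munmap(code, st.st_size);
    return 0;
}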
Theoretically, some of this is achievable. However there are so many gotchas along the way that it's not really a practical solution for anything.
System call formats are totally incompatible
DEP will prevent data executing as code
Memory layouts are different
You need to effectively dynamically 'relink' the code before you can run it.
.. and so forth...
The same executable cannot be run on both Windows and Linux.
You write your code platform-independently (STL, Boost, and Qt can help with this), then compile it with G++ on Linux to produce a Linux binary, and similarly with a compiler on the Windows platform.
EDIT: Also, perhaps these two posts might help you:
One
Two
Why don't you take a look at Wine? It's for running Windows executables on Linux. Another solution is using Java or .NET bytecode.
You can run .NET executables on Linux (this requires the Mono runtime).
Also have a look at Agner's objconv (disassembling, converting PE executables to ELF, etc.):
http://www.agner.org/optimize/#objconv
Someone actually figured this out. It's called αcτµαlly pδrταblε εxεcµταblε (APE), and you use the Cosmopolitan C library. The gist is that there's a way to get the Windows PE executable header to also be interpreted as a shell script on Unix-like systems; the same goes for macOS, allowing you to define a single executable. Additionally, they also figured out how to smuggle a ZIP archive into it, so the various sections of the file can be incrementally compressed and decompressed on run.
https://justine.lol/ape.html
https://github.com/jart/cosmopolitan
Example of a single identical Lua binary running on Linux and Windows:
https://ahgamut.github.io/2021/02/27/ape-cosmo/
Doing such a thing would be rather complicated. It isn't just a matter of the CPU instructions being issued; the compiler has dependencies on many libraries that will be linked into the code, and those libraries will have to match at run time or it won't work.
For example, the STL library is a series of templates and library functions. The compiler will inline some constructs and call the library for others. It'd have to be the exact same library to work.
Now, in theory you could avoid using any library and just write in fundamentals, but even there the compiler may make assumptions about how they work, what type of data alignment is involved, calling convention, etc.
Don't get me wrong, it can work. Look at the Wine project and other cases of native Windows drivers being used on Linux. I'm just saying it isn't something you can do quickly and easily.
Far better would be to recompile on each platform.
That is achievable only if you have Wine available on your Linux system. Otherwise, the difference in executable file formats will prevent you from running Windows code on Linux.