Profiling R, C++, Fortran Mix - C++

I am on an odyssey trying to profile an R package for CPU time; the package contains a mix of R, C++ and Fortran code. I have tried lots of things, and they all failed. Has anybody done this before and succeeded? Any OS is fine: I have OS X as my native OS and Ubuntu and Windows 7 in virtual machines. If necessary, I can also natively install any OS.

On Mac OS X, Instruments works fine. However, it is important to make sure that -fomit-frame-pointer is not turned on, so that the profiler can walk the stack. For details see this question and the accepted answer. Also, it is important to compile the packages on the same machine that will be used for profiling, so that Instruments can show the source code and not only the disassembly.

Related

What components of a machine affect the machine code produced given a C++ file input? [closed]

I wrote this question What affects generated machine code at each step of the compilation process? and realized that it was much too broad. So I will try to ask each component of it in a different question.
The first question I will ask is: given an arbitrary C++ file, what affects the resulting executable binary file produced from it? So far I understand that each of the following plays a role:
The CPU architecture, like x86_64, ARM64, PowerPC, MicroBlaze, etc.
The kernel of the machine, like Linux kernel v5.18 or v5.17, a Windows kernel version, a Mac kernel version, etc.
The operating system, such as Debian, CentOS, Windows 7, Windows 10, Mac OS X Mountain Lion, Mac OS X Sierra.
(I'm not sure what the OS changes on top of what the kernel changes.)
Finally, the tools used to compile, assemble and link. Things like GCC, Clang, Visual Studio (VS), GNU assembler, GNU compiler, VS compiler, VS linker, etc.
So the two questions I have from this are:
Is there some other component that I left out that affects what the final executable looks like?
And does the operating system play a role in shaping the final executable machine code? I thought it would all be down to the kernel.
The main one I think you're missing is the Application Binary Interface.  Part of the ABI is the calling convention, which determines certain properties of register usage and parameter passing, so these directly affect the generated machine code.
The kernel has a loader, and that loader works with file formats, like ELF or PE.  These influence the machine code by determining the layout of the process and how the program's code & data are loaded into memory, and how the machine code instructions access data and other code.  Some environments want position independent code, for example, which affects some of the machine code instructions.
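To make the ABI point concrete, here is a small sketch (my illustration, not part of the original answer; the function is made up) showing how the calling convention alone changes the machine code generated from identical C++ source:

// abi_demo.cpp - compile with e.g. `g++ -S abi_demo.cpp` and look at the assembly.
long long add3(long long a, long long b, long long c)
{
    // Under the System V AMD64 ABI (Linux, Mac OS X) the arguments arrive in
    // rdi, rsi and rdx; under the Microsoft x64 ABI (Windows) they arrive in
    // rcx, rdx and r8. Same source, different registers, different machine code.
    return a + b + c;
}

Compiling the same file with -fPIC additionally changes how globals and functions are referenced (RIP-relative addressing, the GOT), which is the position-independent-code effect mentioned above.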
The CPU architecture, like x86_64, ARM64, PowerPC, MicroBlaze, etc.
Yes.  The instruction set architecture defines the available instructions to use, which in turn define the available CPU registers and how they can be used, as well as the sizes of things like pointers.
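As a quick illustration (again my sketch, not from the answer), the same source reports different sizes depending on the target ISA and the data model the ABI chooses for it:

#include <iostream>

int main()
{
    // Typically prints 8 and 8 on x86_64 Linux, 8 and 4 on 64-bit Windows (LLP64),
    // and 4 and 4 on a 32-bit target.
    std::cout << "sizeof(void*) = " << sizeof(void*) << '\n';
    std::cout << "sizeof(long)  = " << sizeof(long) << '\n';
    return 0;
}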
The kernel of the machine, like Linux kernel v5.18 or v5.17, a Windows kernel version, a Mac kernel version, etc.
Not really.  The operating system choice influences the ABI, which is very relevant, though.
The operating system such as Debian, CentOS, Windows 7, Windows 10, Mac OS X Mountain Lion, Mac OS X Sierra.
The operating system usually dictates the ABI, which is important.
the tools used to compile, assemble and link. Things like GCC, Clang, Visual Studio (VS), GNU assembler, GNU compiler, VS compiler, VS linker, etc.
Of course. Different tools produce somewhat different machine code; sometimes the differences are functionally equivalent, though some tools produce better machine code than others for some inputs.

Will console applications compiled on Windows work on Linux and Mac OS X using DOSBox?

The doubt
I have written some code in Microsoft Visual C++ 2010 Express, like so:
#include <iostream>  // std::cout, std::cin
#include <cstdlib>   // std::system
using namespace std;

int main()
{
    system("cls");                     // Windows/DOS shell command: clear the screen
    char name[20];
    cout << "\nEnter your name:";
    cin.getline(name, 20);
    system("pause");                   // Windows/DOS shell command: wait for a key press
    cout << "\nYour name is:" << name;
    system("pause");
    return 0;
}
And now I have compiled it and sent it to a friend on a Linux machine. He downloads the DOSBox software and then runs this program.
THE DOUBT
Will it run as it does on my machine or will this create any problem?
Why am I asking this?
I recently downloaded a Linux live CD and ran it on my machine. I can't install it on this machine as it is a shared PC. Anyway, I typed cls into the terminal and there was no response. I typed pause; again there was no response. So it set me wondering whether the command "cls" that I am passing to the system in the above code will really have any effect on a Linux machine.
There are a few reasons why this program won't work on other machines - I will summarise the two main ones:
You use system() to run commands (cls, pause) that other operating systems do not provide. If you attempt to run these commands on a different OS, the shell will complain that it doesn't recognise them and the program will not behave as intended.
(And probably more importantly,) the Windows executable you have created is a Windows .exe file which is Microsoft's Portable Executable format. Linux can only read executables in ELF format, and Mac OS X uses the Mach-O format.
These two points are worth discussing in their own right, and as Joachim pointed out in the comments, WINE is quite good at recreating a Windows environment on Linux, so this may be an option for program compatibility.
EDIT: I should add here that Point 1 assumes that Point 2 has been overcome. Point 2 is the reason executables built on one OS just plain "don't work" on other operating systems; the short sketch below shows how the formats can be told apart from their first bytes.
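As a rough illustration of Point 2 (a sketch of mine, not from the original answer), the first few bytes of an executable already reveal which loader it was built for:

#include <fstream>
#include <iostream>

int main(int argc, char* argv[])
{
    if (argc < 2) { std::cerr << "usage: whatformat <file>\n"; return 1; }
    std::ifstream f(argv[1], std::ios::binary);
    unsigned char m[4] = {};
    f.read(reinterpret_cast<char*>(m), 4);
    if (m[0] == 0x7F && m[1] == 'E' && m[2] == 'L' && m[3] == 'F')
        std::cout << "ELF - what a Linux loader expects\n";
    else if (m[0] == 'M' && m[1] == 'Z')
        std::cout << "MZ/PE - a DOS or Windows executable\n";
    else
        std::cout << "something else (possibly Mach-O on Mac OS X)\n";
    return 0;
}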
Response to comment:
Generally, yes, ELF files are the standard for all Linux distros (there may be a few rare exceptions). Similarly, PE files are the standard for all Windows versions. Provided you have a relatively up-to-date CPU, an executable compiled on one Linux distro should work on others.
The exception here is, if you compile the program on a machine with a recent CPU, and wish to run it on a machine with a very old CPU, the old CPU may not support some of the instructions that the compiler creates. However, these days just compiling a program with the default settings generally works on all (Intel) CPUs. If you know for a fact that your target machine uses a very different or older CPU, you can add the -march=... compiler option so the compiler generates instructions that will definitely work on the target machine.
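If you are unsure whether a target CPU supports everything the compiler assumed, GCC and Clang offer builtins for a quick runtime check on x86 (a sketch under that assumption; the two feature names are just examples):

#include <iostream>

int main()
{
    __builtin_cpu_init();  // initialise CPU feature detection (GCC/Clang builtin)
    std::cout << "sse4.2: " << (__builtin_cpu_supports("sse4.2") ? "yes" : "no") << '\n';
    std::cout << "avx2:   " << (__builtin_cpu_supports("avx2") ? "yes" : "no") << '\n';
    return 0;
}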
Finally, DOSBox is not a Windows Emulator, it is a DOS emulator. The two systems, despite their history, are quite different. DOSBox is not designed to run native Windows applications, it is designed to run native DOS applications (most of which are abandonware these days). If you'd like to run DOS programs on Linux such as Dangerous Dave (one of my nostalgic favourites), then you can. However, if you wish to run Windows applications, you will need an emulator designed for this purpose, such as WINE.
For reference, DOS uses the obsolete MZ Executable format.
pause and cls most likely will not work directly in other OSes because these are Windows/DOS-specific commands.
If you remove the DOS-specific commands and make the program generic, then the EXE file built in Windows can most probably be executed in Linux or MacOS through Wine. Please see http://www.winehq.org/about/ and http://wiki.winehq.org/MacOSX . I'm saying "most probably" because you still need to try it out to see if there are problems.
If you run your EXE file inside a virtualized environment running Windows, like VirtualBox, then it will work.
On Linux, the command to clear the screen is clear. Is that what you're really intending to do?
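If you want that part of the program to be portable, one common approach (my sketch, not from the answer above) is to pick the command at compile time:

#include <cstdlib>

void clear_screen()
{
#ifdef _WIN32
    std::system("cls");    // cmd.exe built-in on Windows
#else
    std::system("clear");  // works in Linux and Mac OS X terminals
#endif
}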

Should I compile a D program on Linux for Windows?

Is there any way to compile a D program under Linux for a Windows operating system?
The easiest way to do this would probably be to run the Windows version of dmd under Wine. You could set up a cross-compiler, but that'll be a lot of extra hassle for the same effect.
Answer to your first question ("Should I ..."): It depends mainly on your development environment. If you work exclusively on Linux, then it is a good idea to set up a cross-compiler and build Windows applications on Linux.
Answer to the second question ("Is there..."): Yes, there is - by using a cross-compiler capable of targeting the Windows platform.
Both GDC and LDC can be built so that they target 32-bit or 64-bit Windows straight from your Linux box. You can find hundreds of resources on this topic on the Internet. Writing a guide on how to build a GDC cross-compiler is time-consuming, but if you somehow fail to do it on your own, I will write a simple step-by-step guide.

Developing C++ applications to run on embedded Linux setup

I am required to write a C++ application to run on an embedded Linux setup (DMP Vortex86DX processor). The vendor provides a minimal Linux installation image that can be installed to the board and contains the appropriate hardware drivers. My question is motivated by the answer to my previous question about writing Linux software on a particular kernel to run on a different kernel. I don't really know where to start when it comes to writing the software with regard to ensuring compatibility.
My instinctive approach would be to install the same version of g++ on the embedded device and on my desktop development machine, write the application on the dev machine, copy it to the board and compile it there. This seems like madness, though, and I find it hard to believe that this is how embedded software is developed. With regard to the answer to my previous question, is there a way I can simply build on my desktop but use the version of glibc that exists on the embedded device - and if so, how can I enforce linkage to a specific version? Or is it possible to build everything statically so that the application doesn't link to anything dynamically (I doubt this is possible)?
I am a total novice at embedded development, and foresee months of frustration unless I can get hold of some good advice or resources. Any pointers or suggestions on where to start will be very gratefully received, no matter how simple or trivial they seem - I really am starting at the very bottom with regard to embedded stuff.
OK, given that the Vortex86SX/DX/MX claims to be x86-compatible, a small set of compiler switches should enable you to compile code for your target machine: -m32 to ensure 32-bit code, and no -march switch targeting a specific CPU.
Then you'll need to link your code. As long as you don't use anything fancy, but simple established glibc functions, I'd expect the ABI to be the same on your development machine and the embedded system. In other words, you compile against your host libraries, copy the binary to the embedded system, and it should simply run using the libraries available there.
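One way to sanity-check that approach (a sketch of mine, assuming the board's image uses glibc) is to build a tiny probe on the host, copy it to the board, and compare what it reports on both machines:

#include <gnu/libc-version.h>  // glibc-specific header; assumes the target libc is glibc
#include <cstdio>

int main()
{
    // Compare these values between the development machine and the Vortex86DX board.
    std::printf("glibc version: %s\n", gnu_get_libc_version());
    std::printf("pointer size : %zu bytes\n", sizeof(void*));
    return 0;
}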
If X-Linux were to use some other libc, like uClibc or similar, then you'd need a cross-compiler on your host. I have little experience with Ubuntu in that regard, but I know that the sys-devel/crossdev package for Gentoo Linux makes generating cross-compilers very easy. This can be done both for different architectures (not needed in your case) and for different libraries (e.g. uClibc).
I'd say simply give copying the binaries a try, and report back if you encounter any problems there.

How to Compile for OS X in Linux or Windows?

I would like to port my C/C++ apps to OS X.
I don't have a Mac, but I have Linux and Windows. Is there any tool for this?
For Linux, there is a prebuilt GCC cross-compiler (built from Apple's publicly available modified GCC sources).
https://launchpad.net/~flosoft/+archive/cross-apple
Update for 2015
After so many years, the industry-standard IDE, Visual Studio, now supports OSX/iOS/Android.
http://channel9.msdn.com/Events/Visual-Studio/Connect-event-2014/311
Embarcadero's RadStudio also supports building OSX/iOS/Android apps on Windows.
This answer by Thomas also provides a cross-compilation tool.
For all of these options you still need a real Mac/iDevice to test the application.
I have created a project called OSXCross which aims to target OS X (10.4-10.9) from Linux.
It currently supports clang 3.2 up to 3.8 (trunk) (you can use your distro's clang).
In addition, you can build an up-to-date vanilla GCC as well (4.6+).
LTO works as well, for both clang and GCC.
Currently using cctools-870 with ld64-242.
https://github.com/tpoechtrager/osxcross
There appear to be some scripts that have been written to help you get set up cross-compiling for the Mac; I can't say how good they are, or how applicable they are to your project. In the documentation, they refer to these instructions for cross-compiling for 10.4, and these for cross-compiling for 10.5; those instructions may be more helpful than the script, depending on how well the script fits your needs.
If your program is free or open source software, then you may wish instead to create a MacPorts portfile (documentation here), and allow your users to build your program using MacPorts; that is generally the preferred way to install portable free or open source software on Mac OS X. MacPorts has been known to run on Linux in the past, so it may be possible to develop and test your Portfile on Linux (though it will obviously need to be tested on a Mac).
Get "VMware Player"
Get "Mac OS X vm image"
Compile/Debug/Integrate-and-test your code on the new OS to make sure everything works
When you are trying to get something working on multiple platforms, you absolutely must compile/run/integrate/test on the intended platform. You cannot just compile/run on one platform and then say "oh, it should work the same on the other platform".
Even with a really good cross-platform language like Java, you will run into problems where things don't work exactly the same on the other platform.
The only way I have found that respects my time/productivity/ability-to-rapidly iterate on multiple platforms is to use a VM of the other platforms.
There are other solutions like dual-boot and ones that I haven't mentioned but I find that they don't respect my productivity/time.
Take dual-booting as an example:
I make a change on OS 1
reboot into OS 2
forget something on OS 1
reboot into OS 1
make a change on OS 1
reboot into OS 2 ... AGAIN...
BAM there goes 30 minutes of my time and I haven't done anything productive.
You would need a toolchain to cross-compile for Mach-O, but even if you had that, you won't have the Apple libraries around to develop with. I'm not sure how you would be able to port without them, unfortunately.
Apple development is a strange beast unto itself. OS X uses a port of GCC with some modifications to make it 'appley'. In theory, it's possible to get the sources to Apple's GCC and toolchain, as well as the Apple kernel and library headers, and build a cross-compiler on your Windows machine.
Why you'd want to go down this path is beyond me. You can have a cheap Mac mini from $600. The time you invest getting a cross compiler working right (particularly with a Windows host for Unix tools) will probably cost more than the $600 anyway.
If you're really keen to make your app cross platform look into Qt, wxWidgets or FLTK. All provide cross-platform support with minimal changes to the base code. At least that way all you need to do is find a Mac to compile your app on, and that's not too hard to do if you have some technically minded friends who don't mind giving you SSH access to their Mac.
You will definitely need OS X somehow. Even if you don't own a Mac, you can still try some alternatives.
I found this small documentation on the net:
http://devs.openttd.org/~truebrain/compile-farm/apple-darwin9.txt
This describes exactly what you want. I am interested in it myself, haven't tested it yet though (found it 10 minutes ago). The documentation seems good, though.
You can hire a Mac in the cloud from this website. You can hire them from $1, which should be enough (unless you need root access, in which case you are looking at $49+).
There are a few cross-compiler setups, but nearly all of them are meant for distcc-style distributed compiling. To my knowledge there is no way to directly target the Mac platform without actually having a Mac. The closest you can get without resorting to Qt or wxWidgets is OpenStep with GNUstep or similar, but that's not a true Cocoa platform, just very close.
I know this question isn't very active, but I'm answering anyway. Why don't you try using TransMac, then download the Xcode image and do it that way? Or you can use a VM, or Sosumi. You'll definitely find a video on YouTube about Sosumi.
The short answer is: kind of. You will need to use a cross-platform library like Qt. There are IDEs like Qt Creator that will let you develop on one OS and generate Makefiles for the others. For more information on cross-platform development, check out the cross-platform episodes of this podcast (note that the series isn't over and new episodes appear to come out weekly).
As other answers explain, you can probably compile for a Mac on Windows or Linux, but you won't be able to test your applications, so you should probably spend the $600 for a Mac if you're doing professional programming, or, if you're working on open-source software, find a developer with a Mac who will help you.