First: I do not know how to create an MVCE of this problem. I realize that is a big no-no for this site, but frankly there is a lot of expertise here and I don't know a better place to ask this question. Maybe the answer is, post this question on <insert other site here>.
The question: any thoughts as to what is going on here, and how can I probe this problem?
Anyway, the code base is >10K lines of Fortran that also links in an open-source C++ library, nanort. So it's a combined in-house code of Fortran and C++ with a lot going on.
Somewhere in the code I have to read in a binary file in C++ and parse it. The problem I am running into is that 10% of the time, the function std::filesystem::exists tells me the file does not exist, even though it does. In fact, the Fortran inquire routine tells me it does exist in the same execution of the program. Furthermore, at the beginning of the program, std::filesystem::exists reports that it does exist.
So here's that laid out in a simple text diagram:
program starts
Fortran calls C++ -> std::filesystem::exists reports that the file exists
...
many other things happen
...
Fortran calls C++ -> std::filesystem::exists reports that the file does not exist and returns to Fortran with an error flag
the Fortran inquire function reports that the file does in fact exist
Remember, this only happens 10% of the time. The other 90% of the time the program runs fine (as far as I can tell).
System Info:
macOS Big Sur
g++ 11, with -std=c++17 and -O3
gfortran with -fbounds-check and -O3
Going to answer this one even though it's a little embarrassing.
When passing a Fortran character array to a C void function, you pass it as a pointer to a character array. When doing so, you need to make sure that the character array is null terminated at the point where you want the string to end.
Although I was creating a null-terminated copy of the filename, I was passing the original, non-null-terminated string to C++ by mistake. Because of the undefined bits past the end of the string, most of the time this happened to succeed anyway, but sometimes it did not, and the C++ side received a non-null-terminated filename that the system then reported did not exist.
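The fix in my case was simply to pass the null-terminated copy. A more robust pattern, though, is to pass the string length from Fortran explicitly and never rely on the terminator at all. Here is a minimal sketch of the C++ side under that assumption (the function name and signature are illustrative, not my actual code):

#include <cstddef>
#include <filesystem>
#include <string>

// Called from Fortran (e.g. via ISO_C_BINDING). `name` is NOT assumed
// to be null terminated; the caller passes the length explicitly.
extern "C" int file_exists_c(const char* name, int len) {
    // Build the string from exactly `len` characters, then trim any
    // trailing blanks that Fortran padding may have added.
    std::string path(name, static_cast<std::size_t>(len));
    path.erase(path.find_last_not_of(' ') + 1);
    return std::filesystem::exists(path) ? 1 : 0;
}

With the length passed explicitly, a missing terminator can no longer turn into a garbage filename.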
I know that inline asm exists, but is it also possible to execute machine code from a file at RUNTIME?
Would I need to write my own interpreter?
I'm using the GNU C++ compiler with C++14 enabled, on Windows 7.
Thanks for reading.
With your rephrasing into machine code, this question starts taking a more reasonable shape.
A short answer: Yes, you can run machine code from within your application.
A longer answer is - it's complicated.
Essentially, any string of bits and bytes in memory can be executed, given that some conditions are met, such as the data being legal machine instructions (otherwise the processor will raise an illegal-instruction exception and the OS will terminate your program) and the memory page into which the data is loaded being marked with executable permissions.
Having said that, getting that machine code to actually run correctly and do what you expect it to do is significantly harder, and has to do with understanding virtual memory, dynamic loaders and dynamic linkers.
To bluntly answer your question: for a POSIX-compliant environment at least, you could always use the mmap system call to map a file into memory with PROT_EXEC permissions and jump into that memory space, hoping for the best.
Naturally, any symbols that code would be expecting to find in memory aren't likely to be there, and the code had better be compiled as PIC (Position Independent Code), but this roughly answers your question with a YES.
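To make that concrete, here is a minimal sketch of the mmap route. The file name is a placeholder, the bytes in it are assumed to be valid, position-independent machine code for the host CPU, and note that some hardened systems refuse executable file mappings outright:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main() {
    int fd = open("code.bin", O_RDONLY);  // placeholder file of raw code
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) return 1;

    // Map the file with execute permission.
    void* mem = mmap(nullptr, st.st_size, PROT_READ | PROT_EXEC,
                     MAP_PRIVATE, fd, 0);
    close(fd);
    if (mem == MAP_FAILED) return 1;

    // Treat the mapped bytes as a function and call it -- jumping into
    // that memory space and hoping for the best, as described above.
    auto code = reinterpret_cast<void (*)()>(mem);
    code();

    munmap(mem, st.st_size);
    return 0;
}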
For better control, you'd usually prefer to use a more standard method, such as compiling your extra code as a shared object (Dynamic Link Library, DLL in Windows) and loading it into your application with dlopen while using dlsym to access symbols within it. It still allows you to load machine code from the disk into your application, but it also stores the machine code in a well formatted, standard way, which allows the dynamic linker to properly load and link the new code segment into your application, reducing unexpected behavior.
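A minimal sketch of that route (the library and symbol names are placeholders for whatever you actually build; on many systems you link this with -ldl):

#include <dlfcn.h>
#include <cstdio>

int main() {
    // Load the shared object from disk into this process.
    void* handle = dlopen("./plugin.so", RTLD_NOW);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // dlsym returns a void*; cast it to the function's agreed signature.
    auto do_work = reinterpret_cast<int (*)(int)>(dlsym(handle, "do_work"));
    if (do_work)
        std::printf("result: %d\n", do_work(42));

    dlclose(handle);
    return 0;
}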
In neither of these cases will you need an interpreter, but neither is it a matter of the language or compiler used - this is OS-specific functionality, and will behave quite differently on Windows.
As a different approach, you could consider using the #include directive to import an external chunk of assembly code into your work while you're still working on it, and properly incorporate it at compile time, which will yield far more deterministic results.
Edit:
For Windows, the parallel to mmap is CreateFileMapping.
The parallel to dlopen is LoadLibrary.
Not a Windows expert, sorry...
Let us distinguish between "assembler code"/assembly code (which is what this question initially asked about) and machine code (after one of the edits).
Anything you might describe as "assembler code" (or more usually "assembly code") but not machine code (i.e. anything not being actual, binary, executable machine code) cannot be "executed". You could only read it into what I would call an "assembly-code interpreter" and have it processed, but I do not know of any such program.
Alternatively, you can have it processed at runtime by a build process and execute the resulting executable. That however seems not to be what you are asking about.
Note that this does not mean that you can execute any machine code you might find in a file on your disk. It needs to be for the right platform and be supported by the appropriate runtime environment. That applies to executables created for your machine or compatibles, e.g. the result of a build.
Note that I understand "assembler code" ("assembly code") to mean source code in assembly language, which is a (probably the most basic) representation of programs in (not really) human-readable form. (As immortal has commented, an assembler is the program that processes assembly code into machine code.) Opcode mnemonics are used, e.g. cmp r1, r2 for comparing two registers. That string of characters, however, is guaranteed not to make any sense when you try to execute it directly. (OK, strictly speaking I should say "almost guaranteed"...)
Machine code which is appropriately made for your environment, including a loader, can be executed from a file. Any operating system will support you doing that; most will even provide a GUI for it. (I notice this sounds somewhat cynical; sorry, not meant to be.) Windows, for example, will execute an executable if you double-click its icon in Windows Explorer.
An alternative to such executable programs are libraries. Dynamic link libraries in particular are probably quite close to what you are thinking of. They are very similar in that they need to be targeted at your environment. They can then (usually partially) be executed from a linked program via agreed calling mechanisms. Those mechanisms in turn ensure that the code is executed in a matching environment, including being able to return results.
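Since the question mentions Windows, here is a minimal sketch of such a calling mechanism using the Win32 API (the library and symbol names are placeholders; the DLL must be built for the same architecture as the calling program):

#include <windows.h>
#include <cstdio>

int main() {
    // Load the DLL from disk into this process.
    HMODULE lib = LoadLibraryA("plugin.dll");
    if (!lib) {
        std::fprintf(stderr, "LoadLibrary failed: %lu\n", GetLastError());
        return 1;
    }

    // GetProcAddress returns a generic function pointer; cast it to the
    // agreed calling signature.
    auto do_work = reinterpret_cast<int (*)(int)>(
        GetProcAddress(lib, "do_work"));
    if (do_work)
        std::printf("result: %d\n", do_work(42));

    FreeLibrary(lib);
    return 0;
}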
I am facing a rather peculiar issue: I have a Qt C++ application that used to work fine. Now, suddenly I cannot start it anymore. No error is thrown, no nothing.
Some more information:
Last line of output when application is started in debug mode with Visual Studio 2012:
The program '[4456] App.exe' has exited with code -1 (0xffffffff).
Actual application code (= first line in main()) is never called or at least no breakpoints are triggered, so debugging is not possible.
The executable's process appears in the process list for a few seconds and then disappears again.
Win 7 x64 with latest Windows updates.
The issue appeared simultaneously on two separate machines.
Application was originally built with Qt 5.2.1. Today I switched to Qt 5.4.1 as a test, but as expected there was no change.
No changes to source code were made. The issue also applies to existing builds of the application.
Running DependencyWalker did not yield anything of interest from my point of view.
I am flat out of ideas. Any pointers on what to try or look at? How can an executable suddenly stop working at all with no error?
I eventually found the reason for this behavior... sort of. The code (e.g. my singletons) was never the problem (as I expected, since the code had always worked). Instead, an external library (the SAP RFC SDK) caused the trouble.
This library depends on the ICU Unicode libraries, and apparently on specific versions at that. Since I wasn't aware of that fact, I only had the ICU libraries that my currently used Qt version needs in my application directory. The ICU libraries for the SAP RFC SDK must have been loaded from a standard Windows path until now.
In the end, some software changes (Windows updates, manual application uninstalls, etc.) must have removed those libraries, which resulted in the described silent failure. Simply copying the required versions of the ICU DLLs into my application folder solved the issue.
The only thing I am not quite sure about is why this was not visible when tracing the loaded DLLs via DependencyWalker.
"Actual application code (= first line in main()) is never called. So debugging is not possible."
You probably have some static-storage initialization failing; it runs before main() is called.
Do you use any interdependent singletons in your code? If so, consolidate them into a single singleton (remember, there shouldn't be more than one singleton).
Also note that debugging is still possible in such a situation. The trap is that, for a case like the one described in my answer, the first line of main()'s body is set as the first breakpoint by default when you start your program in the debugger.
Nothing prevents you from setting breakpoints that are hit before execution reaches main().
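To illustrate the failure mode, here is a contrived sketch of a program that dies before main() is ever reached:

#include <stdexcept>

// The constructor of a namespace-scope object runs during static
// initialization, before main(). An uncaught exception thrown there
// terminates the process -- and a breakpoint on main() is never hit.
struct Config {
    Config() {
        // Imagine this failing because a DLL, file or resource is missing.
        throw std::runtime_error("initialization failed");
    }
};

Config g_config;  // constructed before main() is called

int main() {
    return 0;  // never reached
}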
As for your clarification from comments:
"I do use a few singletons ..."
As mentioned above, if you are really sure you need to use a singleton, actually use a single one.
Otherwise you may end up struggling with the undefined order of initialization of static storage.
Anyway, it doesn't matter that much if static-storage data depends on each other; provide a single access point to it throughout your code, to avoid cluttering the code with heavy coupling to a variety of instances.
Coupling to a single instance makes it easier to refactor the code toward an interface if it turns out a singleton wasn't the right choice.
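As a minimal sketch of that single access point (the class name is illustrative): a function-local static, sometimes called a Meyers singleton, is constructed on first use rather than at some unspecified time before main(), which sidesteps the undefined initialization order:

class Registry {
public:
    static Registry& instance() {
        static Registry instance;  // initialized on first call; thread-safe in C++11 and later
        return instance;
    }
private:
    Registry() = default;  // no other way to create or copy an instance
    Registry(const Registry&) = delete;
    Registry& operator=(const Registry&) = delete;
};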
I know many have asked this question before, but as far as I can see, there's no clear answer that helps C++ beginners. So, here's my question (or request if you like),
Say I'm writing C++ code using Xcode or any text editor, and I want to use some of the tools provided by another C++ program. For instance, an executable. So, how can I call that executable file in my code?
Also, can I exploit other functions/objects/classes provided in a C++ program and use them in my C++ code via this calling technique? Or is it just executables that I can call?
I hope someone could provide a clear answer that beginners can absorb.. :p
So, how can I call that executable file in my code?
The easiest way is to use system(). For example, if the executable is called tool, then:
system( "tool" );
However, there are a lot of caveats with this technique. This call just asks the operating system to do something, but each operating system can understand or answer the same command differently.
For example:
system( "pause" );
...will work in Windows, pausing execution, but not in other operating systems. Also, the rules regarding spaces inside the path to the file differ. Finally, even the path separator can differ ('\' is for Windows only).
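A slightly fuller sketch that at least checks the result ("tool" is still a placeholder for your executable's name):

#include <cstdio>
#include <cstdlib>

int main() {
    int status = std::system("tool");

    // -1 means the command processor itself could not be started; the
    // meaning of any other value is implementation-defined.
    if (status == -1)
        std::fprintf(stderr, "failed to launch the command\n");
    else
        std::printf("command finished with status %d\n", status);

    return 0;
}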
And can I also exploit other functions/objects/classes... from a C++ program
and use them in my C++ code via this calling technique?
Not really. If you want to use classes or functions created by others, you will have to get the source code for them and compile it with your program. This is probably one of the easiest ways to do it, provided that the source code is small enough.
Many times, people create libraries, which are collections of useful classes and/or functions. If the library is distributed in binary form, then you'll need the dll file (or the equivalent for other OSs), and a header file describing the classes and functions provided by the library. This is a rich source of frustration for C++ programmers, since even libraries created with different compilers on the same operating system are potentially incompatible. That's why libraries are often distributed in source code form, with a list of instructions (a makefile, or even worse) to obtain a binary version in a single file, and a header file, as described before.
This is because the C++ standard does not cover the low-level stuff that happens inside a compiler. There are lots of implementation details that were freely left for compiler vendors to do as they wanted, possibly trying to achieve better performance. This unfortunately means that it is difficult to distribute a simple library.
You can call another program easily - this will start an entirely separate copy of the program. See the system() or exec() family of calls.
This is common on Unix, where there are lots of small programs which take an input stream of text, do something, and write the output to the next program. Using these you could sort or search a set of data without having to write any more code.
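For example, here is a minimal POSIX-style sketch that captures another program's output through a pipe (the command is illustrative; on Windows the equivalents are _popen/_pclose):

#include <stdio.h>

int main(void) {
    // Run an external program and read its standard output.
    FILE* pipe = popen("sort data.txt", "r");
    if (!pipe) return 1;

    char line[256];
    while (fgets(line, sizeof line, pipe))
        fputs(line, stdout);  // consume the sorted output line by line

    return pclose(pipe) == -1 ? 1 : 0;
}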
On Windows it's easy to start the default application for a file automatically, so you could write a PDF file and start the default app for viewing it. What is harder on Windows is to control a separate GUI program - unless the program has deliberately been written to allow remote control (e.g. with COM/OLE on Windows), you can't control anything the user does in that program.
So I found out that C(++) programs actually don't compile to plain "binary" (I may have gotten some things wrong here, in that case I'm sorry :D) but to a range of things (symbol table, OS-related stuff, ...) but...
Does assembler "compile" to pure binary? That means no extra stuff besides resources like predefined strings, etc.
If C compiles to something else than plain binary, how can that small assembler bootloader just copy the instructions from the HDD to memory and execute them? I mean if the OS kernel, which is probably written in C, compiles to something different than plain binary - how does the bootloader handle it?
edit: I know that assembler doesn't "compile" because it only has your machine's instruction set - I didn't find a good word for what assembler "assembles" to. If you have one, leave it here as comment and I'll change it.
C typically compiles to assembler, just because that makes life easy for the poor compiler writer.
Assembly code always assembles (not "compiles") to relocatable object code. You can think of this as binary machine code and binary data, but with lots of decoration and metadata. The key parts are:
Code and data appear in named "sections".
Relocatable object files may include definitions of labels, which refer to locations within the sections.
Relocatable object files may include "holes" that are to be filled with the values of labels defined elsewhere. The official name for such a hole is a relocation entry.
For example, if you compile and assemble (but don't link) this program
#include <stdio.h>
int main () { printf("Hello, world\n"); }
you are likely to wind up with a relocatable object file with
A text section containing the machine code for main
A label definition for main which points to the beginning of the text section
A rodata (read-only data) section containing the bytes of the string literal "Hello, world\n"
A relocation entry that depends on printf and that points to a "hole" in a call instruction in the middle of a text section.
If you are on a Unix system a relocatable object file is generally called a .o file, as in hello.o, and you can explore the label definitions and uses with a simple tool called nm, and you can get more detailed information from a somewhat more complicated tool called objdump.
I teach a class that covers these topics, and I have students write an assembler and linker, which takes a couple of weeks, but when they've done that most of them have a pretty good handle on relocatable object code. It's not such an easy thing.
Let's take a C program.
When you run gcc, clang, or 'cl' on the c program, it will go through these stages:
Preprocessor (#include, #ifdef, trigraph analysis, encoding translations, comment management, macros...) including lexing into preprocessor tokens and eventually resulting in flat text for input to the compiler proper.
Lexical analysis (producing tokens and lexical errors).
Syntactical analysis (producing a parse tree and syntactical errors).
Semantic analysis (producing a symbol table, scoping information and scoping/typing errors), plus data-flow analysis, transforming the program logic into an "intermediate representation" that the optimizer can work with (often in SSA form). clang/LLVM uses LLVM IR; gcc uses GIMPLE, then RTL.
Optimization of the program logic, including constant propagation, inlining, hoisting invariants out of loops, auto-vectorization, and many many other things. (Most of the code for a widely-used modern compiler is optimization passes.) Transforming through intermediate representations is just part of how some compilers work, making it impossible / meaningless to "disable all optimizations"
Outputting into assembly source (or another intermediate format like .NET IL bytecode)
Assembling of the assembly into some binary object format.
Linking of the object code with whatever static libraries are needed, as well as relocating it if needed.
Output of the final executable in ELF, PE/COFF, Mach-O, or whatever other format
In practice, some of these steps may be done at the same time, but this is the logical order. Most compilers have options to stop after any given step (e.g. -E for preprocessing or -S for assembly with GCC), including dumping the internal representation between optimization passes for open-source compilers like GCC (-fdump-tree-...).
Note that there's a 'container' of ELF or COFF format around the actual executable binary, unless it's a DOS .com executable.
You will find that a book on compilers (I recommend the Dragon book, the standard introductory book in the field) will have all the information you need and more.
As Marco commented, linking and loading is a large area and the Dragon book more or less stops at the output of the executable binary. To actually go from there to running on an operating system is a decently complex process, which Levine in Linkers and Loaders covers.
I've wiki'd this answer to let people tweak any errors/add information.
There are different phases in translating C++ into a binary executable. The language specification does not explicitly state the translation phases. However, I will describe the common translation phases.
Source C++ To Assembly or Intermediate Language
Some compilers actually translate the C++ code into an assembly language or an intermediate language. This is not a required phase, but helpful in debugging and optimizations.
Assembly To Object Code
The next common step is to translate the assembly language into object code. The object code contains machine code with relative addresses and open references to external subroutines (methods or functions). In general, the translator puts as much information into an object file as it can, and everything else is left unresolved.
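As a tiny illustration of such an open reference (the names are made up): the following translation unit compiles to an object file just fine, but linking fails until some other object file or library provides compute().

// compute() is declared but not defined in this file, so the object
// file records an unresolved reference the linker must fill in later.
int compute(int x);

int main() {
    return compute(7);
}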
Linking Object Code(s)
The linking phase combines one or more object codes, resolves references and eliminates duplicate subroutines. The final output is an executable file. This file contains information for the operating system and relative addresses.
Executing Binary Files
The Operating System loads the executable file, usually from a hard drive, and places it into memory. The OS may convert relative addresses into physical locations. The OS may also prepare resources (such as DLLs and GUI widgets) that are required by the executable (which may be stated in the Executable file).
Compiling Directly To Binary
Some compilers, such as the ones used in Embedded Systems, have the capability to compile from C++ directly to executable binary code. This code will have physical addresses instead of relative addresses and will not require an OS to load it.
Advantages
One of the advantages of these phases is that C++ programs can be broken into pieces, compiled individually and linked at a later time. They can even be linked with pieces from other developers (a.k.a. libraries). This allows developers to compile only the pieces in development and link in pieces that are already validated. In general, the translation from C++ to object code is the time-consuming part of the process. Also, a person doesn't want to wait for all the phases to complete when there is an error in the source code.
Keep an open mind and always expect the Third Alternative (Option).
To answer your questions, please note that this is subjective, as there are different processors, different platforms, different assemblers and C compilers; in this case, I will talk about the Intel x86 platform.
Assemblers do not usually assemble to pure / flat binary (raw machine code); instead they usually produce a file defined with segments such as data, text and bss, to name but a few; this is called an object file. The linker then steps in and adjusts the segments to make it executable, that is, ready to run. Incidentally, the default output when you assemble using GNU as foo.s is a.out, which is shorthand for Assembler Output. (But the same filename is the gcc default for linker output, with the assembler output being only a temporary.)
Boot loaders have a special directive defined; back in the days of DOS, it would be common to find a directive such as .org 100h, which defines the assembler code to be of the old .COM variety, before .EXE took over in popularity. Also, you did not need an assembler to produce a .COM file: using the old debug.exe that came with MSDOS did the trick for small, simple programs. The .COM files did not need a linker and were in a straight ready-to-run binary format. Here's a simple session using DEBUG.
1:*a 0100
2:* mov AH,07
3:* int 21
4:* cmp AL,00
5:* jnz 010c
6:* mov AH,07
7:* int 21
8:* mov AH,4C
9:* int 21
10:*
11:*r CX
12:*10
13:*n respond.com
14:*w
15:*q
This produces a ready-to-run .COM program called 'respond.com' that waits for a keystroke and does not echo it to the screen. Notice, at the beginning, the usage of 'a 0100', which shows that the instruction pointer starts at 100h, which is a feature of a .COM file. This old script was mainly used in batch files waiting for a response without echoing it. The original script can be found here.
Again, in the case of boot loaders, the code is converted to a binary format. There was a program that used to come with DOS called EXE2BIN; its job was to convert the raw object code into a format that could be copied onto a bootable disk for booting. Remember, no linker is run against the assembled code, as the linker is for the runtime environment and sets up the code to make it runnable and executable.
When booting, the BIOS expects the code to be at segment:offset 0x7C00, if my memory serves me correctly. The code (after being EXE2BIN'd) starts executing; the bootloader then relocates itself lower down in memory and continues loading by issuing int 0x13 to read from the disk, switching on the A20 gate, enabling DMA, and switching into protected mode, as the BIOS runs in 16-bit mode. The data read from the disk is loaded into memory, and the bootloader issues a far jump into that code (likely to be written in C). That is in essence how the system boots.
OK, the previous paragraph sounds abstract and simplified, and I may have missed something, but that is how it works in a nutshell.
To answer the assembly part of the question: assembly doesn't compile to binary as I understand it. Assembly === binary; it translates directly. Each assembly operation has a binary string that directly matches it: each operation has a binary code, and each register has a binary address.
That is, unless Assembler != Assembly and I'm misunderstanding your question.
They compile to a file in a specific format (COFF for Windows, etc), composed of headers and segments, some of which have "plain binary" op codes. Assemblers and compilers (such as C) create the same sort of output. Some formats, such as the old *.COM files, had no headers, but still had certain assumptions (such as where in memory it would get loaded or how big it could be).
On Windows machines, the OS's bootstrapper is in a disk sector loaded by the BIOS, where both of these are "plain". Once the OS has loaded its loader, it can read files that have headers and segments.
Does that help?
There are two things that you may be mixing up here. Generally there are two topics:
Executable File Formats (see a list here), for example COFF, XCOFF, ELF
Intermediate Languages, like CIL or GIMPLE or bytecode
The latter may compile to the former in the process of assembly. Some intermediate formats are not assembled, but executed by a virtual machine. In the case of C++, it may be compiled into CIL, which is assembled into a .NET assembly, hence there may be some confusion.
But in general, C and C++ are usually compiled into binary, or in other words, into an executable file format.
You have a lot of answers to read through, but I think I can keep this succinct.
"Binary code" refers to the bits that feed through the microprocessor's circuits. The microprocessor loads each instruction from memory in sequence, doing whatever they say. Different processor families have different formats for instructions: x86, ARM, PowerPC, etc. You point the processor at the instruction you want by giving it the address of the instruction in memory, and then it chugs merrily along through the rest of the program.
When you want to load a program into the processor, you first have to make the binary code accessible in memory so it has an address in the first place. The C compiler outputs a file in the filesystem, which has to be loaded into a new virtual address space. Therefore, in addition to binary code, that file has to include the information that it has binary code, and what its address space should look like.
A bootloader has different requirements, so its file format might be different. But the idea is the same: binary code is always a payload in a larger file format, which includes at a minimum a sanity check to ensure that it's written in the correct instruction set.
C compilers and assemblers typically produce object files, which are then linked or bundled into static library files. For embedded applications, you're more likely to find a compiler which produces something like a raw memory image with instructions beginning at address zero. Otherwise, you can write a linker which converts the output of the C compiler into whatever else you want.
As I understand it, a chipset (CPU, etc.) will have a set of registers for storing data, and understand a set of instructions for manipulating these registers. The instructions will be things like 'store this value to this register', 'move this value', or 'compare these two values'. These instructions are often expressed in short human-grokable alphabetic codes (assembly language, or assembler) which are mapped to the numbers that the chipset understands - those numbers are presented to the chip in binary (machine code.)
Those codes are the lowest level that the software gets down to. Going deeper than that gets into the architecture of the actual chip, which is something I haven't gotten involved in.
Executable files (PE format on Windows) cannot be used to boot the computer, because the PE loader is not in memory.
The way bootstrapping works is that the master boot record on the disk contains a blob of a few hundred bytes of code. The BIOS of the computer (in ROM on the motherboard) loads this blob into memory and sets the CPU instruction pointer to the beginning of this boot code.
The boot code then loads a "second stage" loader, on Windows called NTLDR (no extension) from the root directory. This is raw machine code that, like the MBR loader, is loaded into memory cold and executed.
NTLDR has the full capability to load PE files including DLLs and drivers.
C(++) (unmanaged) really compiles to plain binary. Some OS-related stuff - BIOS and OS function calls - are different for each OS, but still binary.
1. Assembler assembles to pure binary, but, strange as it may seem, it is often less optimized than C(++).
2. The OS kernel, as well as the bootloader, is also written in C, so no problems there.
Java, Managed C++, and other .NET stuff compile into intermediate code (MSIL in .NET), which makes them cross-OS and cross-platform, but requires a local interpreter or translator to run.