Solaris 11.4 system header files prevent usage of stlport4 library - c++

Compiling this minimal C++ program with any Solaris Studio compiler gives an error on Solaris 11.4 when the -library=stlport4 option is used.
hello.cpp
#include <iostream>
int main()
{
    std::cout << "Hello world" << std::endl;
    return 0;
}
$ /opt/solarisstudio12.4/bin/CC -m64 hello.cpp -o hello -library=stlport4
"/opt/solarisstudio12.4/lib/compilers/include/CC/stlport4/stl/_stdio_file.h", line 161: Error: __pad is not a member of const __FILE.
"/opt/solarisstudio12.4/lib/compilers/include/CC/stlport4/stl/_stdio_file.h", line 163: Error: __pad is not a member of const __FILE.
"/opt/solarisstudio12.4/lib/compilers/include/CC/stlport4/stl/_stdio_file.h", line 165: Error: __pad is not a member of const __FILE.
"/opt/solarisstudio12.4/lib/compilers/include/CC/stlport4/stl/_stdio_file.h", line 165: Error: __pad is not a member of const __FILE.
Please help me resolve this issue.

The 64-bit Solaris FILE structure is opaque:
64-bit applications should not rely on having access to the members of the FILE data structure. Attempts to access private implementation-specific structure members directly can result in compilation errors. Existing 32-bit applications are unaffected by this change, but any direct usage of these structure members should be removed from all code.
You can't access the FILE structure when doing a 64-bit compile on Solaris. It should work if you compile a 32-bit binary, though.
Why did Sun do this?
Because Sun (and now Oracle, for at least a bit longer) provide actual binary compatibility guarantees:
A binary application built on Solaris 2.6 or later that makes use of operating system interfaces as defined in stability(5) will run on subsequent releases of Oracle Solaris, including their initial releases and all updates, even if the application has not been recompiled for those latest releases.
That's Oracle's guarantee. Sun's long-ago guarantee was actually stronger, pretty much saying if your code compiled, no later update to Solaris would break it.
And early versions of Solaris had only an 8-bit field for the FILE's associated file descriptor. And that file descriptor field was visible, and code built on early versions of Solaris used it.
So Sun was stuck with an 8-bit field for the file descriptor in the FILE structure.
But that was over three decades ago - before 64-bit processors came about.
And there were no legacy 64-bit binaries Sun had to worry about being binary forward-compatible with.
Since Sun only guaranteed binary compatibility, Sun made the new 64-bit FILE structure opaque so no compliant code could access it. (Yes, Sun was already shipping 64-bit systems back in the mid-1990s.)
Sun did provide an extended FILE library for 32-bit programs that needed to use more than 256 file descriptors with stdio at any one time:
The extended FILE facility allows 32-bit processes to use any valid file descriptor with the standard I/O (see stdio(3C)) C library functions. Historically, 32-bit applications have been limited to using the first 256 numerical file descriptors for use with standard I/O streams. By using the extended FILE facility this limitation is lifted. Any valid file descriptor can be used with standard I/O.
In Solaris 11.4 that now reads:
The extendedFILE.so.1 is an obsolete, empty, library, kept for binary compatibility only.
Its old purpose, the use of file descriptors larger than 255 for 32 bit binaries, is now the default behavior in Oracle Solaris.
The libc library now handles the environment variables originally handled by extendedFILE.so.1
The bottom line is that if you want to access the internals of the 64-bit FILE structure on Solaris, you're not going to be able to do it.
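If all the code actually needs from a FILE is the underlying file descriptor, the supported route is to ask stdio for it via fileno() rather than reading a structure member. A minimal sketch (plain POSIX, nothing Solaris-specific assumed) that builds as C or C++ in both 32-bit and 64-bit mode:
#include <stdio.h>     /* fopen, fclose, printf, fileno (POSIX) */

int main(void)
{
    FILE *fp = fopen("/etc/passwd", "r");
    if (fp == NULL)
        return 1;

    /* Portable: use the accessor instead of poking at private
       members of the (now opaque) FILE structure. */
    int fd = fileno(fp);
    printf("file descriptor = %d\n", fd);

    fclose(fp);
    return 0;
}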

This is a bug in the stlport4 headers provided with the compiler, Oracle Bug 27531287. Patches for Studio 12.3, 12.4, 12.5, and 12.6 are available to customers with current support contracts.
(Andrew Henle's answer explains the underlying problem: the stlport4 headers depended on something they shouldn't have, and broke when that changed in Solaris 11.4.)

Related

Horrid error with strtod(): glibc-2.13 NOT backwards compatible with glibc-2.9?

I'm working on C and C++ programs which need to run on several different embedded platforms, for which I have cross-compilers so I can do the build on my x86 desktop.
I have a horrible problem with certain functions, e.g. "strtod()". Here's my simple test program:
#include <stdlib.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    if ( (argc < 2) || (NULL == argv[1]) ) return 0;
    double myDouble = strtod(argv[1], NULL);
    printf("\nValue: %f\n\n", myDouble);
    return 0;
}
Normally I build all programs with dynamic linking to keep the binaries as small as possible. The above works fine on the x86 and Power PC. However, on the Arm system (BeagleBoard xM with Debian) strtod() misbehaves (the program always outputs "0.000000").
I tried building the program with the option '-static', and that worked on the Beagle:
root@beaglexm:/app# ./test.dynamic 1.23
Value: 0.000000
[Dynamic linked version - WRONG!!]
root@beaglexm:/app# ./test.static 1.23
Value: 1.230000
[Correct!!]
I also tested on a BeagleBone Black, which has a slightly different distribution. Both versions (static and dynamic) worked fine on the BBB.
Digging around in the libraries, I found the following version numbers:
Cross Compiler Toolchain: libc-2.9.so
BeagleBoard XM (DOESN'T WORK): libc-2.13.so
BeagleBone Black (WORKS!): libc-2.16.so
So my cross compiler is building against an older version of glibc. I've read in several places that glibc should be backwards-compatible.
I thought about static linking only libc, but according to this question it's a bad idea unless all libraries are statically linked.
Static linking everything does work, but there are serious constraints on the system which mean I need to keep the binaries as small as possible.
Any ideas what would cause horrible problems with strtod() (and similar functions) and/or why glibc 2.13 is not backwards compatible?
EDIT:
I didn't mention that the "soname" (i.e. top level name) is the same on all platforms: "libc.so.6" From my reading of the docs, the number AFTER the .so in the "soname" is the major version and only changes if the interface changes - hence all these versions should be compatible. The number BEFORE the .so which appears in the actual file name (shown above, and found by following the symlink) is the minor version. See: link
Generally version numbers reflect compatibility. The number that appears between the .so and the next dot represents a MAJOR revision, not guaranteed compatible with any other major revision.
The number(s) that follow that, which you'll only see if you follow the symbolic links, represent a MINOR revision. These can be used interchangeably, and symlinks are used to do just that. The program links against libc.so.6 or whatever, and on the actual filesystem, libc.so.6 is a symbolic link to (for example) libc.so.6.12.
glibc tries to maintain compatibility even across major revisions, but there are times when they simply have to accept a breaking change. Typically this would be when a new version of the C or POSIX standards are released and function signatures get updated in a way that breaks binary compatibility.
Any numbers that appear before the .so will also break compatibility if changed; these usually represent a complete rewrite of a program. For example glib vs glib2. Not of concern for libc.
The tool ldd is very useful for investigating library dependencies and discovering which exact version of the library is actually being loaded.
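If you want to confirm at run time which glibc a dynamically linked binary actually picked up on a given board, glibc exposes its own version string through gnu_get_libc_version(). A small diagnostic sketch (the header and function are glibc-specific, so this is GNU/Linux only):
#include <gnu/libc-version.h>  /* gnu_get_libc_version(), glibc-specific */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    printf("running against glibc %s\n", gnu_get_libc_version());
    if (argc > 1)
        printf("strtod(\"%s\") = %f\n", argv[1], strtod(argv[1], NULL));
    return 0;
}
Running it on the build host and on each target makes it obvious when a binary is being resolved against a different libc than the one it was built against.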

Compile errors using PST SDK

I am porting a project from Windows to Linux/Ubuntu which involves using open-source software called "PST SDK" (http://pstsdk.codeplex.com), written in C++. It has not been updated since 2010, but it works fine on Windows and supposedly works/did work on Linux. I set up a demo program with nothing more than the header files included (the library is all headers, nothing to link). I had a lot of errors but got them fixed by using g++ instead of gcc and by fiddling with the location of the library files and the required boost files.
However, once I tried making some calls, I ran into problems. I got a few things working, but the following code:
std::vector<pstsdk::folder> folderlist;
folderlist.push_back(folder);
causes this compile error:
error: 'pstsdk::property_bag& pstsdk::property_bag::operator=(const pstsdk::property_bag&)' is private
(There is a lot of other verbiage about what was instantiated from what file.) Here is the compile command:
g++ -c -I/usr/local/include -Iboost_1_46_1 -Ipstsdk -I/usr/local/include/mysql ostdemo.cpp
It is specifically the push_back call causing the errors - take that out and they go away. Of course that call is critical to the working of my program. Any idea what this could be? I assume it has something to do with my compiler version or switches, but I can't figure it out. I am not much of a C++ programmer, so any help would be appreciated.
Your vector::push_back() requires that the type is copy-assignable. Obviously, your pstsdk::folder is not copy-assignable due to the assignment operator being private.
What are the requirements for a type to be placed in a vector? It depends on whether you're using pre-C++11 or C++11, plus what operations you plan to do on these types. See here:
http://en.cppreference.com/w/cpp/container/vector
Pay attention to CopyAssignable, CopyConstructible, MoveAssignable and MoveConstructible
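To make the requirement concrete, here is a minimal sketch of a type that is copy-constructible but not copy-assignable (the class is made up; it only stands in for something like pstsdk::folder):
#include <vector>

class folder_like {
public:
    folder_like() {}
    folder_like(const folder_like&) {}              // copy-constructible
private:
    folder_like& operator=(const folder_like&);     // NOT copy-assignable
};

int main()
{
    std::vector<folder_like> v;
    folder_like f;
    v.push_back(f);   // pre-C++11 libstdc++ typically rejects this:
                      // instantiating push_back drags in code that
                      // assigns elements, and the operator= is private
    return 0;
}
A common workaround when you cannot make the element type assignable is to store (smart) pointers to the objects in the vector instead of the objects themselves.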
As for the case of it working on Windows as opposed to Linux:
Remember that "Windows" and "Linux" are not C++ compilers. You need to expand on this and tell us what version of the g++ compiler you're using on each OS.

How should I use g++'s -finput-charset compiler option correctly in order to compile a non-UTF-8 source file?

I'm trying to compile a UTF-16BE C++ source file in g++ with the -finput-charset compiler option, but I'm always getting a bunch of errors. More details follow.
My environment (in CentOS Linux):
g++: 4.1.2
iconv: 2.5
Linux language(in Terminal): LANG="en_US.UTF-8"
My sample source file (stored in UTF-16BE encoding):
// main.cpp:
#include <iostream>
int main()
{
    std::cout << "Hello, UTF-16" << std::endl;
    return 0;
}
My steps:
I read the manual of g++ about the -finput-charset option. The g++ manual says:
-finput-charset=charset
Set the input character set, used for translation from the character set of the input file to the source character set used by
GCC. If the locale does not specify, or GCC cannot get this
information from the locale, the default is UTF-8. This can be
overridden by either the locale or this command line option.
Currently the command line option takes precedence if there’s a
conflict. charset can be any encoding supported by the system’s
"iconv" library routine.
Thus I entered the command as follows:
g++ -finput-charset=UTF-16BE main.cpp
and I got these errors:
In file included from main.cpp:1:
/usr/lib/gcc/i386-redhat-linux/4.1.2/../../../../include/c++/4.1.2/iostream:1:
error: stray ‘\342’ in program
/usr/lib/gcc/i386-redhat-linux/4.1.2/../../../../include/c++/4.1.2/iostream:1:
error: stray ‘\274’ in program
...(repeatedly, A LOT, around 4000+)...
/usr/lib/gcc/i386-redhat-linux/4.1.2/../../../../include/c++/4.1.2/iostream:1:
error: stray ‘\257’ in program
main.cpp: In function ‘int main()’:
main.cpp:5: error: ‘cout’ is not a member of ‘std’
main.cpp:5: error: ‘endl’ is not a member of ‘std’
The manual text suggests that the charset can be any encoding supported by the 'iconv' library routine, so I guessed the compilation errors might be caused by my iconv library. I then tested iconv:
iconv --from-code=UTF-16BE --to-code=UTF-8 --output=main_utf8.cpp main.cpp
A "main_utf8.cpp" file is generated as expected. I then tried to compile it:
g++ -finput-charset=UTF-8 main_utf8.cpp
Note that I specified the input charset explicitly to see whether I had done anything wrong, but this time an a.out was generated without any errors. When I ran it, it produced the correct output.
Finally...
I couldn't figure out what I did wrong. I searched the web trying to find examples for this compiler option, but I couldn't find any.
Please advise! Thanks!
Further edits:
Thanks, guys! Your replies are quick! Some updates:
When I said "UTF-16" I meant "UTF-16 + BOM". In fact I used UTF-16BE. I have updated the text above.
Some answers say the errors are caused by the non-UTF-16 header files. Here are my thoughts if this is the case: We'll always include some standard header files when writing a C/C++ project, right? Such as stdio.h or iostream. If the G++ compiler only deals with the encoding of the source files created by us but never with the source files in the standard library, then what does this -finput-charset option exist for??
Final edit:
At last, my solution is like this:
At the beginning, I changed the encoding of my source files to GB2312, as "Mr Lister" said below. This worked fine for a while, but later I found it unsuitable for my situation because most of the other parts of the system still use UTF-8 for communication and interfaces, so I had to convert the encoding in many places... Not only was this extra work, it could also cause some performance loss in my program.
Later I tried converting all my source files to UTF-8 + BOM. That way, Visual Studio on Windows could compile them happily, but GCC on Linux would complain. I then wrote a shell script to remove the BOM, and I run it before compiling the code with GCC.
Luckily, I don't have to build the code on Linux manually, because TeamCity, the continuous integration tool used in my project, generates the build automatically. I changed the build steps in TeamCity to run this script before the daily build starts.
With this UTF-8 + BOM + script method, I decided not to edit my source code in Linux. If I wanted to, I would have to make sure the code builds before committing it, which means running the BOM-removal script before every build, which in turn means SVN would report EVERY file as modified (BOM removed), making it very easy to commit a wrong file by mistake. To solve this, I wrote another shell script to add the BOM back to the source files. I still don't edit my code in Linux very often, but when I really need to, I don't have to face a terribly long change list in the commit dialog.
Encoding Blues
You cannot use UTF-16 for source code files, because the header you are including, <iostream>, is not UTF-16-encoded. As #include includes files verbatim, you suddenly have a UTF-16-encoded file with a large chunk (approximately 4k, apparently) of invalid data.
There is almost no good reason to ever use UTF-16 for anything, so this is just as well.
Edit: Regarding problems with encoding support: The OSes themselves are not responsible for providing encoding support, this comes down to the compilers used.
g++ on Windows supports absolutely all of the same encodings as g++ on Linux, because it's the same program, unless whatever version of g++ you are using on Windows relies on a deeply broken iconv library.
Inspect your toolchain and ensure that all your tools are in working order.
As an alternative; don't use Chinese in the source files, but write them in English, using English-language literals, or simple TOKEN_STYLE_PLACEHOLDERs, using l10n and i18n to replace these in the running executable.
Threedit: -finput-charset is almost certainly a holdover from the days of codepages and other nonsense of the kind; an ISO-8859-n file will almost always be compatible with UTF-8 standard headers. However, see the reedit below.
Reedit: For next time; remember a simple mantra: "N'DUUH!"; "Never Don't Use UTF-8!"
I18N
A common solution to this kind of problem is to remove the problem entirely, by way of, for instance, gettext.
When using gettext, you usually end up with a function loc(char *) that abstracts away most of the translation-tool-specific code. So, instead of
#include <iostream>
int main () {
    std::cout << "瓜田李下" << std::endl;
}
you would have
#include <iostream>
#include "translation.h"
int main () {
    std::cout << loc("DEEPER_MEANING") << std::endl;
}
and, in zh.po:
msgid "DEEPER_MEANING"
msgstr "瓜田李下"
Of course, you could also then have a en.po:
msgid "DEEPER_MEANING"
msgstr "Still waters run deep"
This can be expanded upon, and the gettext package has tools for expansion of strings with variables and such, or you could use printf, to account for different grammars.
The Third Option
Instead of having to deal with multiple compilers with different requirements for file encodings, line endings, byte order marks, and other problems of the kind, it is possible to cross-compile using MinGW or similar tools.
This option requires some setup, but may very well reduce future overhead and headaches.
The error message says the problem is in the include files, so I presume what happens is that the include files are normal UTF-8, but the compiler wants to treat them as UTF-16 because of the compiler switch.
So I'm afraid the solution is to always convert the source to UTF-8 first; perhaps in the makefile. Or to find a solution that doesn't contain include files in other encodings...
Edit:
Maybe a GB encoding would work, if and only if none of the system source files contain any non-ASCII characters. Then you could tell the compiler they were GB encoded without problem.
This does not work because the compiler will also try to read the header files as UTF-16, which they are not.
UTF-16 is not an encoding for bytes. It's an encoding where your basic storage unit is 16 bits large.
When you want to store UTF-16 in a byte sequence you have to choose between UTF-16BE and UTF-16LE.
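A quick illustration of that choice: the code point U+0041 ('A') is stored as 0x00 0x41 in UTF-16BE and as 0x41 0x00 in UTF-16LE, and a leading BOM (U+FEFF) comes out as FE FF or FF FE respectively, which is how tools detect the byte order. The tiny sketch below just prints those standard serializations:
#include <cstdio>

int main()
{
    // The BOM (U+FEFF) followed by 'A' (U+0041), in both byte orders.
    const unsigned char utf16be[] = { 0xFE, 0xFF, 0x00, 0x41 };
    const unsigned char utf16le[] = { 0xFF, 0xFE, 0x41, 0x00 };
    for (unsigned i = 0; i < sizeof utf16be; ++i)
        std::printf("%02X ", utf16be[i]);
    std::printf(" <- UTF-16BE\n");
    for (unsigned i = 0; i < sizeof utf16le; ++i)
        std::printf("%02X ", utf16le[i]);
    std::printf(" <- UTF-16LE\n");
    return 0;
}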

SIMD Sony Vector Math Library in OS X with C++

I'm currently writing a very simple game engine for an assignment and to make the code a lot nicer I've decided to use a vector math library. One of my lecturers showed me the Sony Vector Math library which is used in the Bullet Physics engine and it's great as far as I can see. I've got it working on Linux nicely but I'm having problems porting it to work on OS X (intel, Snow Leopard). I have included the files correctly in my project but the C++ version of the library doesn't seem to compile. I can get the C version of the library working but it has a fairly nasty API compared to the C++ version and the whole reason of using this library was to neaten the code in the first place.
http://glosx.blogspot.com/2008/07/sony-vector-math-library.html
This blog post that I've stumbled upon seems to suggest something's up with the compiler? It's fairly short so I couldn't take a lot of information from it.
When I try to use the C++ version I get the following errors (expanded view of each error):
/usr/include/vectormath/cpp/../SSE/cpp/vectormath_aos.h:156:0
/usr/include/vectormath/cpp/../SSE/cpp/vectormath_aos.h:156:
error: '__forceinline' does not name a type
second error:
/Developer/apps/gl test/main.cpp:7:0 In file included from /Developer/apps/gl test/main.cpp
/usr/include/vectormath/cpp/vectormath_aos.h:38:0 In file included from
/usr/include/vectormath/cpp/vectormath_aos.h
/usr/include/vectormath/cpp/../SSE/cpp/vectormath_aos.h:330:0 In file included from
/usr/include/vectormath/cpp/../SSE/cpp/vectormath_aos.h
/usr/include/vectormath/cpp/../SSE/cpp/vecidx_aos.h:45:0 Expected constructor, destructor,
or type conversion before '(' token in /usr/include/vectormath/cpp/../SSE/cpp/vecidx_aos.h
Finally two errors at the end of the main.cpp file:
Expected '}' at the end of input
Expected '}' at the end of input
I've Googled my heart out but I can't seem to find any answers or anything to point me in the right direction so any help will be greatly received.
Thanks,
__forceinline is a compiler-specific keyword that is supported by only a couple of compilers (most notably MSVC). Clearly, your compiler does not support the __forceinline keyword, and the code in question is non-portable.
A very poor workaround would be to pass a new define to your compiler that gives the keyword the correct meaning. E.g.: -D__forceinline=inline or -D__forceinline=__attribute__((always_inline)) (Thanks Paul!)
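A slightly tidier version of the same idea is a small portability shim that maps a neutral macro onto whatever each compiler understands; the macro and file names here are invented for the sketch:
// force_inline.h -- hypothetical shim for __forceinline-style hints.
#if defined(_MSC_VER)
  #define FORCE_INLINE __forceinline
#elif defined(__GNUC__) || defined(__clang__)
  #define FORCE_INLINE inline __attribute__((always_inline))
#else
  #define FORCE_INLINE inline
#endif

// Usage:
FORCE_INLINE float dot3(const float a[3], const float b[3])
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
Passing the define on the command line, as above, avoids touching the library's source; the shim is the option to reach for when you control the including code.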
The SSE version was assumed to be only for Microsoft Visual Studio. For other platforms (Mac etc) you can use the scalar version.
Bullet\Extras\vectormathlibrary\include\vectormath\scalar\cpp
It looks like someone's fixed this up and posted a patched version in response to this very issue.
Now GCC compliant.
Which compiler are you using on OS X? There are four to choose from in the standard Xcode 3.2 install, and the default is gcc 4.2. You might be better off trying gcc 4.0.

Are object files platform independent?

Is it possible to compile a program on one platform and link it on another? What does an object file contain? Can we de-link an executable to produce object files?
No. In general object file formats might be the same, e.g. ELF, but the contents of the object files will vary from system to system.
An object file contains stuff like:
Object code that implements the desired functionality
A symbol table that can be used to resolve references
Relocation information to allow the linker to locate the object code in memory
Debugging information
The object code is usually not only processor specific, but also OS specific if, for example, it contains system calls.
Edit:
Is it possible to compile program on one platform and link with other ?
Absolutely. If you use a cross-compiler. This compiler specifically targets a platform and generates object files (and programs) that are compatible with the target platform. So you can use an X86 Linux system, for example, to make programs for a powerpc or ARM based system using the appropriate cross compiler. I do it here.
Is it possible to compile program on one platform and link with other ?
In general, no. Object files are compiler specific. Some compilers spit out COFF, others spit out ELF, etc. On top of that, you have to worry about calling conventions, system calls, etc. This is platform dependent.
What does object file contain ?
Symbol tables, code, relocation, linking and debugging information.
If what you're after is portability, then write portable C/C++ and let a platform-specific compliant compiler do the work.
In practice, no. There are several things that would have to be the same:
- OS interface (same system calls)
- memory layout of data (endianness, struct padding, etc.)
- calling convention
- object file format (e.g. ELF is pretty standard on Linux)
Look up ABI for more information.
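To make the "memory layout of data" point above concrete, here is a small sketch showing that struct layout is decided by the platform's ABI (alignment and padding rules), not by the source code:
#include <cstdint>
#include <cstdio>

struct Sample {
    std::uint8_t  flag;   // 1 byte
    std::uint32_t value;  // usually 4-byte aligned, so the compiler
                          // inserts 3 bytes of padding after 'flag'
};

int main()
{
    // Commonly prints 8, not the 5 bytes of raw field data; a platform
    // with different alignment rules is free to give another answer.
    std::printf("sizeof(Sample) = %lu\n", (unsigned long) sizeof(Sample));
    return 0;
}
An object file built under one set of layout rules cannot safely exchange such a struct with code built under another.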
It doesn't need to be said again: C/C++ object files aren't portable.
On the other hand, ANSI C is one of the most portable languages there is. You may not be able to pick up your object files, but recompiling your source is likely to work if you stick to the ANSI C standard. This might be true of C++ as well.
I don't know how universal GNU C++ is, but if you can compile with gcc on one computer you're good to go on any other machine that also has gcc installed. Just about every machine you can think of has a C compiler. That's portability.
No. They are not platform independent. Take, for instance, the GNU C Compiler (gcc), which generates ELF binary files. Windows compilers (Borland, Microsoft, Open Watcom) can produce Windows binaries in PE (Portable Executable) format. Novell binaries are in NLM (NetWare Loadable Module) format.
As these examples show, the output format is compiler dependent: a linker on a Windows platform knows nothing about the ELF or NLM formats, so there is no way to combine different formats to produce an executable that can run on any platform.
Take Apple's Mac OS X before the Intel chips were put in: it ran on the PowerPC platform. Even though it had the GNU C Compiler, the binaries it produced were specific to PowerPC; if you were to take such a binary and copy it onto a Linux platform, it would not run, as a result of the differences in the platforms' microprocessor instructions.
Again, the same principle applies to the OS/390 mainframe system: a binary that a GNU C compiler produces for that platform will not run on a pre-Intel Apple Mac OS X system.
Edit: To further clarify what the ELF format looks like, see below; this was obtained by running objdump -s main.o under Linux.
main.o: file format elf32-i386
Contents of section .text:
0000 8d4c2404 83e4f0ff 71fc5589 e55183ec .L$.....q.U..Q..
0010 14894df4 a1000000 00a30000 0000a100 ..M.............
0020 000000a3 00000000 8b45f483 38010f8e .........E..8...
0030 9c000000 8b55f48b 420483c0 048b0083 .....U..B.......
0040 ec086800 00000050 e8fcffff ff83c410 ..h....P........
0050 a3000000 00a10000 000085c0 7520a100 ............u ..
0060 00000050 6a1f6a01 68040000 00e8fcff ...Pj.j.h.......
0070 ffff83c4 10c745f8 01000000 eb5a8b45 ......E......Z.E
0080 f4833802 7e218b55 f48b4204 83c0088b ..8.~!.U..B.....
0090 0083ec08 68240000 0050e8fc ffffff83 ....h$...P......
00a0 c410a300 000000a1 00000000 85c07520 ..............u
00b0 a1000000 00506a20 6a016828 000000e8 .....Pj j.h(....
00c0 fcffffff 83c410c7 45f80100 0000eb08 ........E.......
00d0 e8fcffff ff8945f8 8b45f88b 4dfcc98d ......E..E..M...
00e0 61fcc3 a..
Contents of section .rodata:
0000 72000000 4552524f 52202d20 63616e6e r...ERROR - cann
0010 6f74206f 70656e20 696e7075 74206669 ot open input fi
0020 6c650a00 77000000 4552524f 52202d20 le..w...ERROR -
0030 63616e6e 6f74206f 70656e20 6f757470 cannot open outp
0040 75742066 696c650a 00 ut file..
Contents of section .comment:
0000 00474343 3a202847 4e552920 342e322e .GCC: (GNU) 4.2.
0010 3400 4.
Now compare that to a PE format for a simple DLL
C:\Program Files\Microsoft Visual Studio 9.0\VC\bin>dumpbin /summary "C:\Documents and Settings\Tom\My Documents\Visual Studio 2008\Projects\SimpleLib\Release\SimpleLib.dll"
Microsoft (R) COFF/PE Dumper Version 9.00.30729.01
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file C:\Documents and Settings\Tom\My Documents\Visual Studio 2008\Projects\SimpleLib\Release\SimpleLib.dll
File Type: DLL
Summary
1000 .data
1000 .rdata
1000 .reloc
1000 .rsrc
1000 .text
Notice the differences in the sections: under ELF there are sections such as .text, .rodata, and .comment, and this is an ELF object for the i386 processor.
Hope this helps,
Best regards,
Tom.
They are platform dependent. For example, the file command prints out the following:
$ file foo.o
foo.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
C++ has the additional detail that the names it puts into an object file are typically 'mangled' to deal with type safety for names that are overloaded. The methods used to mangle names are not part of the C++ standard (in fact, name mangling is an implementation detail that's not required at all if the vendor can come up with a different way to implement overloading). So even for the same target platform, you cannot count on being able to link object files from one compiler vendor with another's.
There are times when a compiler vendor might change the name mangling scheme from one compiler version to another. For example, I believe there are versions of MSVC for which you can't reliably link C++ object files from an older version to a newer version.
Some platforms have the name mangling specified in an ABI standard for the platform (such as ARM which uses the name mangling specified in the generic C++ ABI that was originally developed for SVr4 on Itanium), but others don't (Windows). Even for the ARM, I'm not sure how interoperable the ABI standard makes linking C++ object files that were created by different compilers.
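As a small illustration of mangling (the mangled form in the comment is what Itanium-ABI compilers such as g++ and clang++ produce; MSVC emits something entirely different):
// mangling.cpp
int add(int a, int b) { return a + b; }
// Itanium-ABI compilers export this symbol as _Z3addii; an object file
// that expects a differently mangled name will simply fail to link.

extern "C" int add_c(int a, int b) { return a + b; }
// extern "C" suppresses mangling: every compiler exports plain "add_c".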
I just wanted to say that as long as two platforms use the same processor architecture, object format, and calling convention (nowadays, usually one created by the processor maker), there is a good chance that object files will work interchangeably.
However, even in C the compiler makes some assumptions about certain library functions, such as stack protection (that I know of), being present, which need not be the same on both platforms. In the case that such code is generated, the objects will not be directly compatible.
System calls are not really relevant as long as the systems share them at all, since they are normally called through C wrappers in the standard libraries.
In the end this only applies to C and very similar OSes like Linux and the BSDs, but it can happen.
It's possible to compile with GCC and create an object file in ELF file format and convert the object file to work in Visual Studio. I have done this multiple times now.
There are three things you need to know to do this: the function calling convention, the object file format, and the function name mangling.
Function calling conventions: For 32-bit mode the function calling convention is easy: they are the same for Windows and Unix. For 64-bit mode Windows and Unix use different calling conventions. Therefore, in 64-bit mode you have to get the calling convention correct. You can either do this when you compile or from the object file itself. It's much easier to do this when you compile. To have GCC use the Windows calling convention use -mabi=ms. To do this from the object file you need a tool. Agner Fog's objconv tool can do this for some functions.
Object file format: To convert the object file format you need a tool. I use Agner Fog's objconv tool for this. It can convert between several different object file formats. For example, to convert ELF64 to COFF64 (PE32+), do objconv -fcoff64 foo.o foo.obj.
Function name mangling: Due to function overloading in C++, compilers mangle function names. The details for each compiler can be found in Agner Fog's calling conventions manual. GCC and Visual Studio mangle function names differently. To work around this, precede function definitions with extern "C".
If you get all three of these correct, and you don't make any OS-specific calls, then you may be able to use your object files between compilers successfully. There are other problems that can occur, of course. See the objconv manual for more details. But so far this method has worked well for me.
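Putting the three points together, a translation unit meant to take the GCC-to-objconv-to-Visual-Studio route might look roughly like this (a sketch only; ms_abi is GCC's x86-64 function attribute for forcing the Windows calling convention, as a per-function alternative to building everything with -mabi=ms):
// exported.cpp -- hypothetical example; compile with g++, convert the
// resulting object file with objconv, then link from Visual Studio.
#if defined(__GNUC__) && defined(__x86_64__)
  /* Use the Microsoft x86-64 calling convention for this function. */
  #define EXPORT_ABI __attribute__((ms_abi))
#else
  #define EXPORT_ABI
#endif

/* extern "C" takes C++ name mangling out of the picture, so the symbol
   is plain "compute" no matter which compiler consumes the object. */
extern "C" EXPORT_ABI int compute(int x)
{
    return 2 * x;
}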