I have a program that uses the following to write floats to a file, which will ultimately be read on a user's computer.
// computer A
float buffer[1024];
...
fwrite(reinterpret_cast<void*>(buffer), sizeof(float), 1024, file);
// computer B
float buffer[1024];
fread(reinterpret_cast<void*>(buffer), sizeof(float), 1024, file);
The programs on the two computers are not the same, but they are compiled with the same compiler and settings (I wouldn't expect this to work out otherwise). Will the floats be interpreted as expected across all typical desktop computers given both programs are compiled to target the platform, or is it possible the second computer will interpret the bytes differently?
Pretty much all modern desktop computers use IEEE 754 floating point format for their single-precision floating point numbers, so you should be okay.
One potential fly in the ointment is endian-ness: if you write out the file on a computer with a big-endian CPU and then read it on a computer that has a little-endian CPU (or vice-versa) then the reading computer will not interpret the file's values correctly. This is not a big problem in the last few years since almost all commonly used CPUs are little-endian these days, but previously that problem was commonly seen e.g. when transferring data from an Intel-based computer to a PowerPC-based computer, or vice-versa. A common way to handle the problem would be to specify a standard/canonical endian-ness (doesn't matter which one) for the values in your file, and be sure to byte-swap the values when saving (or loading) the file if the computer you are saving/loading them on doesn't match the canonical endian-ness specified by your file format.
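As a rough sketch of that approach, assuming little-endian is chosen as the canonical file order and that a hypothetical HOST_IS_BIG_ENDIAN macro is defined by your build system on big-endian targets:

#include <cstdio>
#include <cstring>
#include <utility>

// Hypothetical helper: write one IEEE 754 float in a fixed (little-endian)
// byte order, swapping on big-endian hosts so the file format stays canonical.
// Assumes a 4-byte float, as on typical desktop platforms.
void write_float_le(float f, std::FILE* file) {
    unsigned char bytes[sizeof(float)];
    std::memcpy(bytes, &f, sizeof(float));   // grab the raw bytes safely
#if defined(HOST_IS_BIG_ENDIAN)               // assumption: set by the build system
    std::swap(bytes[0], bytes[3]);
    std::swap(bytes[1], bytes[2]);
#endif
    std::fwrite(bytes, 1, sizeof(bytes), file);
}

A matching read routine would perform the same swap after fread() when loading.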
This is probably a duplicate question...
It is definitely possible that the format can be interpreted differently.
However, you said "typical desktop computers", which generally means x86, x64, and perhaps ARM. All of these run little-endian in practice (ARM is technically bi-endian, but desktop and mobile operating systems configure it little-endian), so in practice you'd probably be OK.
Everything depends on the compiler, but to guard against problems you can check some properties of float for the platform.
E.g. check std::numeric_limits<float>::digits; if it is 24, the compiler is using the IEEE 754 single-precision format for float.
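As a minimal compile-time sketch of that kind of check (assuming a C++11 compiler):

#include <limits>

// Fail the build if float is not a 4-byte IEEE 754 single-precision type.
static_assert(std::numeric_limits<float>::is_iec559,
              "float is not an IEEE 754 type");
static_assert(std::numeric_limits<float>::digits == 24,
              "float does not have a 24-bit significand");
static_assert(sizeof(float) == 4, "float is not 4 bytes");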
I am currently working on porting a piece of code written and compiled for SGI using MIPSPro to RHEL 6.7 with gcc 4.4.7. My target architecture is x86_64. I was able to generate an executable for this code and now I am trying to run it.
I am trying to read binary data from a file; this file was generated on the SGI system by basically casting an object's pointer to a char* and saving that to a file. The piece of binary data that I am trying to read has more or less this format:
[ Header, Object A , Object B, ..., Object N ]
Where each object is an instantiation of different classes.
The way the code currently processes the file is by reading it all into memory, taking the pointer to where each object starts, and using reinterpret_cast<ClassA*>(pointer) on it. Something tells me that the people who originally designed this were not concerned about portability.
So far I was able to deal with the endianness of the Header object by just swapping the bytes. Unfortunately, Objects A, B, ..., N all contain fields of type double, and trying to do a byte-swap of the 8 bytes does not seem to work.
My question then is, are doubles in SGI/MIPSPro structured differently than in Linux? I know that the sizeof(double) in the SGI machine returns 8 so I think they are of the same size.
According to the MIPSPro ABI:
the MIPS processors conform to the IEEE 754 floating point standard
Your target platform, x86_64, shares this quality.
As such, double means IEEE-754 double-precision float on both platforms.
When it comes to endianness, x86_64 processors are little-endian; but, according to the MIPSpro assembly programmers' guide, some MIPS processors are big-endian:
For R4000 and earlier systems, byte ordering is configurable into either big-endian or little-endian byte ordering (configuration occurs during hardware reset). When configured as a big-endian system, byte 0 is always the most-significant (leftmost) byte. When configured as a little-endian system, byte 0 is always the least-significant (rightmost byte).
The R8000 CPU, at present, supports big-endian only
So, you will have to check the datasheet for the original platform and see whether any byte swapping is needed.
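For illustration, a sketch of such a swap, assuming the file really does contain IEEE 754 doubles written most-significant byte first (big-endian MIPS) and is read on little-endian x86_64 (read_be_double is a made-up helper):

#include <cstring>

// Reverse the 8 bytes of a big-endian double and reassemble it as a native
// little-endian double. memcpy avoids strict-aliasing problems.
double read_be_double(const unsigned char bytes[8]) {
    unsigned char swapped[8];
    for (int i = 0; i < 8; ++i)
        swapped[i] = bytes[7 - i];
    double d;
    std::memcpy(&d, swapped, sizeof d);
    return d;
}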
I read on the internet that the standard byte order for networks is big-endian, also known as network byte order. Before transferring data over a network, data is first converted to network byte order (big-endian).
But can anyone please tell me who takes care of this conversion?
Does the code developer really have to worry about this endianness? If yes, can you please give examples of where we need to take care of it (in the case of C and C++)?
The first place where the network vs native byte order matters is in creating sockets and specifying the IP address and port number. Those must be in the correct order or you will not end up talking to the correct computer, or you'll end up talking to the incorrect port on the correct computer if you mapped the IP address but not the port number.
The onus is on the programmer to get the addresses in the correct order. There are functions like htonl() that convert from host (h) to network (n) order; l indicates 'long' meaning '4 bytes'; s indicates 'short' meaning '2 bytes' (the names date from an era before 64-bit systems).
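For illustration, here is a minimal sketch (assuming a POSIX sockets environment; make_loopback_addr is a made-up helper) of where htons()/htonl() come into play when filling in an IPv4 address:

#include <arpa/inet.h>   // htons, htonl
#include <netinet/in.h>  // sockaddr_in, INADDR_LOOPBACK
#include <sys/socket.h>  // AF_INET
#include <cstring>

// The kernel expects the port and the IPv4 address in network (big-endian)
// byte order, whatever the host's native order happens to be.
sockaddr_in make_loopback_addr(unsigned short port) {
    sockaddr_in addr;
    std::memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);                    // 16-bit: host -> network
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  // 32-bit: host -> network
    return addr;
}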
The other time it matters is if you are transferring binary data between two computers, either via a network connection correctly set up over a socket, or via a file. With single-byte code sets (SBCS), or UTF-8, you don't have problems with textual data. With multi-byte code sets (MBCS), or UTF-16LE vs UTF-16BE, or UTF-32, you have to worry about the byte order within characters, but the characters will appear one after the other. If you ship a 32-bit integer as 32 bits of data, the receiving end needs to know whether the first byte is the MSB (most significant byte, for big-endian) or the LSB (least significant byte, for little-endian) of the 32-bit quantity. Similarly with 16-bit integers, or 64-bit integers. With floating point, you could run into the additional problem that different computers could use different formats for the floating point, independently of the endianness issue. This is less of a problem than it used to be, thanks to IEEE 754.
Note that IBM mainframes use EBCDIC instead of ASCII or ISO 8859-x character sets (at least by default), and their classic floating point format is not IEEE 754 (it pre-dates that standard by a decade or more). These issues, therefore, are crucial to deal with when communicating with the mainframe. The programs at the two ends have to agree on how each end will understand the other. Some protocols define a byte order (e.g. network byte order); others define 'sender makes right' or 'receiver makes right' or 'client makes right' or 'server makes right', placing the conversion workload on different parts of the system.
One advantage of text protocols (especially those using an SBCS) is that they evade the problems of endianness — at the cost of converting text to value and back, but computation is cheap compared to even gigabit networking speeds.
In C and C++, you will have to worry about endianness in low level network code. Typically the serialization and deserialization code will call a function or macro that adjusts the endianness - reversing it on little endian machines, doing nothing on big endian machines - when working with multibyte data types.
Just send stuff in the correct order that the receiver can understand,
i.e. use http://www.manpagez.com/man/3/ntohl/ and their ilk.
We are all fans of portable C/C++ programs.
We know that sizeof(char) or sizeof(unsigned char) is always 1 "byte". But that 1 "byte" doesn't mean a byte with 8 bits. It just means a "machine byte", and the number of bits in it can differ from machine to machine. See this question.
Suppose you write out the ASCII letter 'A' into a file foo.txt. On any normal machine these days, which has an 8-bit machine byte, these bits would get written out:
01000001
But if you were to run the same code on a machine with a 9-bit machine byte, I suppose these bits would get written out:
001000001
More to the point, the latter machine could write out these 9 bits as one machine byte:
100000000
But if we were to read this data on the former machine, we wouldn't be able to do it properly, since there isn't enough room. Somehow, we would have to first read one machine byte (8 bits), and then somehow transform the final 1 bit into 8 bits (a machine byte).
How can programmers properly reconcile these things?
The reason I ask is that I have a program that writes and reads files, and I want to make sure that it doesn't break 5, 10, 50 years from now.
How can programmers properly reconcile these things?
By doing nothing. You've presented a filesystem problem.
Imagine that dreadful day when the first of many 9-bit machines is booted up, ready to recompile your code and process that ASCII letter A that you wrote to a file last year.
To ensure that a C/C++ compiler can reasonably exist for this machine, this new computer's OS follows the same standards that C and C++ assume, where files have a size measured in bytes.
...There's already a little problem with your 8-bit source code. There's only about a 1-in-9 chance each source file is a size that can even exist on this system.
Or maybe not. As is often the case for me, Johannes Schaub - litb has pre-emptively cited the standard regarding valid formats for C++ source code.
Physical source file characters are mapped, in an
implementation-defined manner, to the basic source character set
(introducing new-line characters for end-of-line indicators) if
necessary. Trigraph sequences (2.3) are replaced by corresponding
single-character internal representations. Any source file character
not in the basic source character set (2.2) is replaced by the
universal-character-name that designates that character. (An
implementation may use any internal encoding, so long as an actual
extended character encountered in the source file, and the same
extended character expressed in the source file as a
universal-character-name (i.e. using the \uXXXX notation), are handled
equivalently.)
"In an implementation-defined manner." That's good news...as long as some method exists to convert your source code to any 1:1 format that can be represented on this machine, you can compile it and run your program.
So here's where your real problem lies. If the creators of this computer were kind enough to provide a utility to bit-extend 8-bit ASCII files so they may be actually stored on this new machine, there's already no problem with the ASCII letter A you wrote long ago. And if there is no such utility, then your program already needs maintenance and there's nothing you could have done to prevent it.
Edit: The shorter answer (addressing comments that have since been deleted)
The question asks how to deal with a specific 9-bit computer...
With hardware that has no backwards-compatible 8-bit instructions
With an operating system that doesn't use "8-bit files".
With a C/C++ compiler that breaks how C/C++ programs have historically written text files.
Damian Conway has an often-repeated quote comparing C++ to C:
"C++ tries to guard against Murphy, not Machiavelli."
He was describing other software engineers, not hardware engineers, but the intention is still sound because the reasoning is the same.
Both C and C++ are standardized in a way that requires you to presume that other engineers want to play nice. Your Machiavellian computer is not a threat to your program because it's a threat to C/C++ entirely.
Returning to your question:
How can programmers properly reconcile these things?
You really have two options.
Accept that the computer you describe would not be appropriate in the world of C/C++
Accept that C/C++ would not be appropriate for a program that might run on the computer you describe
The only way to be sure is to store data in text files, with numbers as strings of digit characters rather than as some number of bits. XML using UTF-8 and base 10 should be a pretty good overall choice for portability and readability, as it is well defined. If you want to be paranoid, keep the XML simple enough that, in a pinch, it can easily be parsed with a simple custom parser, in case a real XML parser is not readily available for your hypothetical computer.
When parsing numbers, if one is bigger than what fits in your numeric data type, well, that's an error situation you need to handle as you see fit in the context. Or use a "big int" library, which can handle arbitrarily large numbers (with an order-of-magnitude performance hit compared to "native" numeric data types, of course).
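A tiny sketch of the numbers-as-text idea, independent of any XML machinery:

#include <cstdio>
#include <cstdlib>

// Numbers travel as base-10 text, so nothing about byte size or byte order
// is baked into the stored form.
int main() {
    long value = 123456789L;
    char text[32];
    std::snprintf(text, sizeof text, "%ld", value);   // serialize
    long back = std::strtol(text, nullptr, 10);       // parse back
    return (back == value) ? 0 : 1;
}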
If you need to store bit fields, then store bit fields, that is number of bits and then bit values in whatever format.
If you have a specific numeric range, then store the range, so you can explicitly check if they fit in available numeric data types.
The byte is a pretty fundamental data unit, so you cannot really transfer binary data between storage systems with different numbers of bits per byte; you have to convert, and to convert you need to know how the data is formatted, otherwise you simply cannot convert multi-byte values correctly.
Adding actual answer:
In your C code, do not handle byte buffers, except in isolated functions which you will then modify as appropriate for the CPU architecture. For example, JPEG-handling functions would take either a struct wrapping the image data in an unspecified way, or a file name to read the image from, but never a raw char* to a byte buffer.
Wrap strings in a container which does not assume encoding (presumably it will use UTF-8 or UTF-16 on 8-bit byte machine, possibly currently non-standard UTF-9 or UTF-18 on 9-bit byte machine, etc).
Wrap all reads from external sources (network, disk files, etc.) in functions which return native data (a sketch of this follows after the list).
Create code where no integer overflows happen, and do not rely on overflow behavior in any algorithm.
Define all-ones bitmasks using ~0 (instead of 0xFFFFFFFF or something)
Prefer IEEE floating point numbers for most numeric storage, where integer is not required, as those are independent of CPU architecture.
Do not store persistent data in binary files, which you may have to convert. Instead use XML in UTF-8 (which can be converted to UTF-X without breaking anything, for native handling), and store numbers as text in the XML.
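Here is the promised sketch for the "wrap all reads" rule, a hypothetical helper that assumes the external format stores a 32-bit value as four 8-bit groups, most significant first:

#include <cstdio>
#include <cstdint>

// Read four external 8-bit groups, most significant first, and return them
// as a native integer. The result is the same regardless of the host's
// endianness or CHAR_BIT, because the external format is fixed.
bool read_u32_be(std::FILE* f, std::uint_least32_t& out) {
    out = 0;
    for (int i = 0; i < 4; ++i) {
        int c = std::fgetc(f);
        if (c == EOF)
            return false;
        out = (out << 8) | (static_cast<std::uint_least32_t>(c) & 0xFFu);
    }
    return true;
}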
As with different byte orders, except much more so, the only way to be sure is to port your program to an actual machine with a different number of bits per byte and run comprehensive tests. If this is really important, then you may have to first implement such a virtual machine, and port a C compiler and the needed libraries to it, if you can't find one otherwise. Even a careful (= expensive) code review will only take you part of the way.
If you're planning to write programs for quantum computers (which will be available in the near future for us to buy), then start learning quantum physics and take a class on programming them.
Unless you're planning for boolean computer logic in the near future, then... my question is: how will you make sure that the filesystem available today will still be the same tomorrow? Or how will a file stored with 8-bit binary remain portable in the filesystems of tomorrow?
If you want to keep your programs running through the generations, my suggestion is to create your own computing machine, with your own filesystem and your own operating system, and change the interface as the needs of tomorrow change.
My problem is that the computer system I programmed on a few years ago (the Motorola 68000) no longer exists for the general public, and the program relied heavily on the machine's byte order and assembly language. Not portable anymore :-(
If you're talking about writing and reading binary data, don't bother. There is no portability guarantee today, other than that data you write from your program can be read by the same program compiled with the same compiler (including command-line settings). If you're talking about writing and reading textual data, don't worry. It works.
First: The original practical goal of portability is to reduce work; therefore if portability requires more effort than non-portability to achieve the same end result, then writing portable code in such case is no longer advantageous. Do not target 'portability' simply out of principle. In your case, a non-portable version with well-documented notes regarding the disk format is a more efficient means of future-proofing. Trying to write code that somehow caters to any possible generic underlying storage format will probably render your code nearly incomprehensible, or so annoying to maintain that it will fall out of favor for that reason (no need to worry about future-proofing if no one wants to use it anyway 20 yrs from now).
Second: I don't think you have to worry about this, because the only realistic solution to running 8-bit programs on a 9-bit machine (or similar) is via Virtual Machines.
It is extremely likely that anyone in the near or distant future using some 9+ bit machine will be able to start up a legacy x86/ARM virtual machine and run your program that way. Hardware 25-50 years from now should have no problem whatsoever running entire virtual machines just for the sake of executing a single program; and that program will probably still load, execute, and shut down faster than it does today on current native 8-bit hardware. (In fact, some cloud services today already trend toward starting entire VMs just to service individual tasks.)
I strongly suspect this is the only means by which any 8-bit program would be run on 9/other-bit machines, due to the points made in other answers regarding the fundamental challenges inherent to simply loading and parsing 8-bit source code or 8-bit binary executables.
It may not remotely resemble "efficient", but it would work. This also assumes, of course, that the VM will have some mechanism by which 8-bit text files can be imported to and exported from the virtual disk onto the host disk.
As you can see, though, this is a huge problem that extends well beyond your source code. The bottom line is that, most likely, it will be much cheaper and easier to update/modify or even re-implement-from-scratch your program on the new hardware, rather than to bother trying to account for such obscure portability issues up-front. The act of accounting for it almost certainly requires more effort than just converting the disk formats.
8-bit bytes will remain until end of time, so don't sweat it. There will be new types, but this basic type will never ever change.
I think the likelihood of non-8-bit bytes in future computers is low. It would require rewriting so much, and for so little benefit. But if it happens...
You'll save yourself a lot of trouble by doing all calculations in native data types and just rewriting inputs. I'm picturing something like:
template <int OUTPUTBITS, typename CALLABLE>
class converter {
public:
    // datasource yields values that are inputbits wide on the source machine
    converter(int inputbits, CALLABLE datasource);
    // returns the next value widened into a native type of at least OUTPUTBITS
    smallestTypeWithAtLeast<OUTPUTBITS> get();
};
Note that this can be written in the future, when such a machine exists, so you need do nothing now. Or, if you're really paranoid, make sure get() just calls datasource when OUTPUTBITS == inputbits.
Kind of late but I can't resist this one. Predicting the future is tough. Predicting the future of computers can be more hazardous to your code than premature optimization.
Short Answer
While I end this post with how 9-bit systems handled portability with 8-bit bytes, this experience also makes me believe 9-bit byte systems will never arise again in general purpose computers.
My expectation is that future portability issues will be with hardware having a minimum of 16 or 32 bit access making CHAR_BIT at least 16.
Careful design here may help with any unexpected 9-bit bytes.
QUESTION to /. readers: is anyone out there aware of general purpose CPUs in production today using 9-bit bytes or one's complement arithmetic? I can see where embedded controllers may exist, but not much else.
Long Answer
Back in the 1990s, the globalization of computers and Unicode made me expect UTF-16, or larger, to drive an expansion of bits-per-character: CHAR_BIT in C. But as legacy outlives everything, I also expect 8-bit bytes to remain an industry standard and to survive at least as long as computers use binary.
BYTE_BIT: bits-per-byte (popular, but not a standard I know of)
BYTE_CHAR: bytes-per-character
The C standard does not address a char consuming multiple bytes. It allows for it, but does not address it.
3.6 byte: (final draft C11 standard ISO/IEC 9899:201x)
addressable unit of data storage large enough to hold any member of the basic character set of the execution environment.
NOTE 1: It is possible to express the address of each individual byte of an object uniquely.
NOTE 2: A byte is composed of a contiguous sequence of bits, the number of which is implementation-defined. The least significant bit is called the low-order bit; the most significant bit is called the high-order bit.
Until the C standard defines how to handle BYTE_CHAR values greater than one, and I'm not talking about "wide characters", this is the primary factor portable code must address, not larger bytes. Existing environments where CHAR_BIT is 16 or 32 are what to study. ARM processors are one example. I see two basic modes for reading external byte streams that developers need to choose from:
Unpacked: one BYTE_BIT character into a local character. Beware of sign extensions.
Packed: read BYTE_CHAR bytes into a local character.
Portable programs may need an API layer that addresses the byte issue. To sketch an idea on the fly, one I reserve the right to attack in the future:
#define BYTE_BIT 8                      // bits-per-byte
#define BYTE_CHAR (CHAR_BIT/BYTE_BIT)   // bytes-per-char

size_t byread(void *ptr,
              size_t size,     // number of BYTE_BIT bytes
              int packing,     // bytes to read per char
                               // (negative for sign extension)
              FILE *stream);

size_t bywrite(void *ptr,
               size_t size,
               int packing,
               FILE *stream);
size: the number of BYTE_BIT bytes to transfer.
packing: bytes to transfer per char character. While this is typically 1 or BYTE_CHAR, it could indicate the BYTE_CHAR of the external system, which can be smaller or larger than the current system's.
Never forget endianness clashes.
Good Riddance To 9-Bit Systems:
My prior experience with writing programs for 9-bit environments leads me to believe we will not see such systems again, unless you happen to need a program to run on a real old legacy system somewhere, likely in a 9-bit VM on a 32/64-bit system. Since the year 2000 I have sometimes made a quick search for, but have not seen, references to current descendants of the old 9-bit systems.
Any future general purpose 9-bit computers, highly unexpected in my view, would likely either have an 8-bit mode or an 8-bit VM (#jstine) to run programs under. The only exception would be special-purpose embedded processors, which general purpose code would not be likely to run on anyway.
In days of yore one 9-bit machine was the PDP-15. A decade of wrestling with a clone of this beast makes me never expect to see 9-bit systems arise again. My top picks on why follow:
The extra data bit came from robbing the parity bit in core memory. Old 8-bit core carried a hidden parity bit with it. Every manufacturer did it. Once core got reliable enough, some system designers switched the already existing parity bit to a data bit in a quick ploy to gain a little more numeric power and memory addressing during times of weak, non-MMU machines. Current memory technology does not have such parity bits, machines are not so weak, and 64-bit memory is so big. All of which should make the design changes less cost effective than the changes were back then.
Transferring data between 8-bit and 9-bit architectures, including off-the-shelf local I/O devices, and not just other systems, was a continuous pain. Different controllers on the same system used incompatible techniques:
Use the low order 16-bits of 18 bit words.
Use the low-order 8 bits of 9-bit bytes where the extra high-order bit might be set to the parity from bytes read from parity sensitive devices.
Combine the low-order 6 bits of three 8-bit bytes to make 18 bit binary words.
Some controllers allowed selecting between 18-bit and 16-bit data transfers at run time. What future hardware, and supporting system calls, your programs would find just can't be predicted in advance.
Connecting to the 8-bit Internet will be horrid enough by itself to kill any 9-bit dreams someone has. They got away with it back then as machines were less interconnected in those times.
Having something other than a power-of-two number of bits in byte-addressed storage brings up all sorts of troubles. Example: if you want an array of thousands of bits in 8-bit bytes you can do unsigned char bits[1024] = { 0 }; bits[n>>3] |= 1 << (n&7);. To fully pack 9-bit bytes you must do actual divides, which brings horrid performance penalties. This also applies to bytes-per-word. (A small sketch contrasting the two appears at the end of this answer.)
Any code not actually tested on 9-bit byte hardware may well fail on its first actual venture into the land of unexpected 9-bit bytes, unless the code is so simple that refactoring it in the future for 9 bits would only be a minor issue. The prior byread()/bywrite() sketch may help here, but it would likely need an additional CHAR_BIT mode setting to set the transfer mode, returning how the current controller arranges the requested bytes.
To be complete anyone who wants to worry about 9-bit bytes for the educational experience may need to also worry about one's complement systems coming back; something else that seems to have died a well deserved death (two zeros: +0 and -0, is a source of ongoing nightmares... trust me). Back then 9-bit systems often seemed to be paired with one's complement operations.
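Here is the small sketch promised above, contrasting the cheap shift/mask split that power-of-two byte sizes allow with the divide/modulo that a 9-bit byte would force (set_bit_8 and set_bit_general are made-up illustrations):

#include <cstddef>

// 8-bit bytes: the bit index splits with a shift and a mask.
void set_bit_8(unsigned char bits[], std::size_t n) {
    bits[n >> 3] |= static_cast<unsigned char>(1u << (n & 7));
}

// General byte width: the same split needs a real divide and modulo.
void set_bit_general(unsigned char bits[], std::size_t n, unsigned bits_per_byte) {
    bits[n / bits_per_byte] |= static_cast<unsigned char>(1u << (n % bits_per_byte));
}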
In C and C++, a byte is CHAR_BIT bits, so if a byte has 9 bits on some machine, for whatever reason, it's up to the compiler to reconcile that. As long as you write text using char, say, if you write/read 'A' to a file, you would be writing/reading only one byte to the file. So, you should not have any problem.
Are there many computers which use Big Endian? I just tested on 5 different computers, each purchased in a different year and of a different model. Each uses Little Endian. Is Big Endian still used nowadays, or was it for older processors such as the Motorola 6800?
Edit:
Thank you TreyA, intel.com/design/intarch/papers/endian.pdf is a very nice and handy article. It covers all the answers below, and also expands upon them.
There are many processors in use today that are big-endian, or that allow switching the endian mode between big and little endian (e.g. SPARC, PowerPC, ARM, Itanium...).
It depends on what you mean by "care about endian". You usually don't need to care much about endianness specifically if you just program to the data you need. Endianness matters when you need to communicate with the outside world, such as reading/writing a file or sending data over a network, and you do that by reading/writing integers larger than 1 byte directly to/from memory.
When you do need to deal with external data, you need to know its format. Part of its format is, e.g., knowing how an integer is encoded in that data. If the format specifies that the first byte of a 4-byte integer is the most significant byte of said integer, you read that byte and place it in the most significant position of the integer in your program, and you can accomplish that fine with code that runs on both little- and big-endian machines.
So it's not so much specifically about the processor endianess, but the data you need to deal with. That data might have integers stored in either "endian", you need to know which, and various data formats will use various endianess depending on some specification, or depending on the whim of the guy that came up with the format.
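As a sketch of that approach, a hypothetical decoder for a format that stores 4-byte integers most significant byte first; it yields the same result on little- and big-endian hosts:

#include <cstdint>

// Assemble a 32-bit value from four bytes laid out most significant first,
// without ever reinterpreting memory, so host endianness never matters.
std::uint32_t load_u32_be(const unsigned char b[4]) {
    return (static_cast<std::uint32_t>(b[0]) << 24) |
           (static_cast<std::uint32_t>(b[1]) << 16) |
           (static_cast<std::uint32_t>(b[2]) << 8)  |
            static_cast<std::uint32_t>(b[3]);
}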
Big endian is still by far the most used, in terms of different architectures. In fact, outside of Intel and the old DEC computers, I don't know of a little-endian one: SPARC, PowerPC (IBM Unix machines), HP's Unix platforms, IBM mainframes, etc. are all big-endian. But does it matter? About the only time I've had to consider endianness was when implementing some low-level system routines, like modf. Otherwise, an int is an integer value in a certain range, and that's it.
The following common platforms use big-endian encoding:
Java
Network data in TCP/UDP packets (and in the IP headers as well; network byte order is defined to be big-endian)
The x86/x64 CPUs are little-endian. If you are going to interface with binary data between the two, you should definitely be aware of this.
This qualifies more as a comment than an answer, but I can't comment, and I think it's such a great article that reading it is worthwhile.
This is a classic on endianness by Danny Cohen, dating from 1980:
ON HOLY WARS AND A PLEA FOR PEACE
There is not enough context to the question. In general, you should simply be aware of it at all times, but you do not need to stress over it in everyday coding. If you plan on messing with the internal bytes of your integers, start worrying about endianness. If you plan on doing standard math on your integers, don't worry about it.
The two big places where endianness pops up is in networking (big endian standard) and binary records (have to research whether integers are stored big endian or little endian).
I'm confused about the byte order of a system/CPU/program, so I must ask some questions to clear my mind.
Question 1
If I only use type char in my C++ program:
void main()
{
    char c = 'A';
    char* s = "XYZ";
}
Then compile this program to an executable binary file called a.out.
Can a.out run on both little-endian and big-endian systems?
Question 2
If my Windows XP system is little-endian, can I install a big-endian Linux system in VMWare/VirtualBox?
What makes a system little-endian or big-endian?
Question 3
If I want to write a byte-order-independent C++ program, what do I need to take into account?
Can a.out run on both little-endian and big-endian systems?
No, because pretty much any two CPUs that are so different as to have different endianness will not run the same instruction set. C++ isn't Java; you don't compile to a bytecode that then gets compiled or interpreted. You compile to the machine code for a specific CPU, and endianness is part of the CPU.
But that's outside of endian issues. You can compile that program for different CPUs and those executables will work fine on their respective CPUs.
What makes a system little-endian or big-endian?
As far as C or C++ is concerned, the CPU. Different processing units in a computer can actually have different endianness (the GPU could be big-endian while the CPU is little-endian), but that's somewhat uncommon.
If I want to write a byte-order independent C++ program, what do I need to take into account?
As long as you play by the rules of C or C++, you don't have to care about endian issues.
Of course, you also won't be able to load files directly into POD structs. Or read a series of bytes, pretend it is a series of unsigned shorts, and then process it as a UTF-16-encoded string. All of those things step into the realm of implementation-defined behavior.
There's a difference between "undefined" and "implementation-defined" behavior. When the C and C++ spec say something is "undefined", it basically means all manner of brokenness can ensue. If you keep doing it, (and your program doesn't crash) you could get inconsistent results. When it says that something is defined by the implementation, you will get consistent results for that implementation.
If you compile for x86 in VC2010, what happens when you pretend a byte array is an unsigned short array (ie: unsigned char *byteArray = ...; unsigned short *usArray = (unsigned short*)byteArray) is defined by the implementation. When compiling for big-endian CPUs, you'll get a different answer than when compiling for little-endian CPUs.
In general, endian issues are things you can localize to input/output systems. Networking, file reading, etc. They should be taken care of in the extremities of your codebase.
Question 1:
Can a.out run on both little-endian and big-endian systems?
No. Because a.out is already compiled for whatever architecture it is targeting. It will not run on another architecture that it is incompatible with.
However, the source code for that simple program has nothing that could possibly break on different endian machines.
So yes, it (the source) will work properly (well... aside from void main(), for which you should be using int main() instead).
Question 2:
If my WindowsXP system is little-endian, can I install a big-endian
Linux system in VMWare/VirtualBox?
Endianness is determined by the hardware, not the OS. So whatever (native) VM you install on it will have the same endianness as the host (since x86 is all little-endian).
What makes a system little-endian or big-endian?
Here's an example of something that will behave differently on little vs. big-endian:
uint64_t a = 0x0123456789abcdefull;
uint32_t b = *(uint32_t*)&a;   // reads whichever 4 bytes sit at the lowest address
printf("b is %x\n", b);
*Note that this violates strict-aliasing, and is only for demonstration purposes.
Little Endian : b is 89abcdef
Big Endian : b is 1234567
On little-endian, the lower bits of a are stored at the lowest address. So when you access a as a 32-bit integer, you will read the lower 32 bits of it. On big-endian, you will read the upper 32 bits.
Question 3:
If I want to write a byte-order independent C++ program, what do I
need to take into account?
Just follow the standard C++ rules and don't do anything ugly like the example I've shown above. Avoid undefined behavior, avoid type-punning...
Little-endian / big-endian is a property of hardware. In general, binary code compiled for one piece of hardware cannot run on another, except in virtualization environments that interpret machine code and emulate the target hardware. There are bi-endian CPUs (e.g. ARM, IA-64) that feature a switch to change endianness.
As far as byte-order-independent programming goes, the only case where you really need to do it is when dealing with networking. There are functions such as ntohl and htonl to help you convert between your hardware's byte order and the network's byte order.
The first thing to clarify is that endianness is a hardware attribute, not a software/OS attribute, so WinXP and Linux are not big-endian or little endian, but rather the hardware on which they run is either big-endian or little endian.
Endianness is a description of the order in which the bytes are stored in a data type. A big-endian system stores the most significant byte (the one with the biggest place value) first, and a little-endian system stores the least significant byte first. It is not mandatory for each data type to use the same order as the others on a system, so you can have mixed-endian systems.
A program compiled for a little-endian machine would not run on a big-endian system, but that has more to do with the instruction set available than with the endianness of the system on which it was compiled.
If you want to write a byte-order independent program you simply need to not depend on the byte order of your data.
1: The output of the compiler will depend on the options you give it and on whether you use a cross-compiler. By default, it should run on the operating system you are compiling it on and not on others (perhaps not even on others of the same type; not all Linux binaries run on all Linux installs, for example). In large projects, this will be the least of your concerns, as libraries, etc., will need to be built and linked differently on each system. Using a proper build system (like make) will take care of most of this without you needing to worry.
2: Virtual machines abstract the hardware in such a way as to allow essentially anything to run within anything else. How the operating systems manage their memory is unimportant as long as they both run on the same hardware and support whatever virtualization model is in use. Endianness means the byte order: whether data is read left-to-right or right-to-left (or in some other order). Some hardware supports both, and virtualization allows both to coexist in that case (although I am not aware of how this would be useful, except that it is possible in theory). However, Linux works on many different architectures (and Windows on a few other than x86), so the situation is more complicated.
3: If you monkey with raw memory, such as with binary operators, you might put yourself in a position of depending on endianness. However, most modern programming is at a higher level than this. As such, you are likely to notice if you get into something which may impose endianness-based limitations. If such is ever required, you can always implement options for both endiannesses using the preprocessor.
The endianness of a system determines how the bytes of a multi-byte value are interpreted, i.e. which byte is considered the "first" and which the "last".
You need to care about it only when loading from or saving to sources external to your program, like disk or the network.