Modeling microcomputer registers in memory? - c++

As a little side project I've been creating an emulator for an older microcomputer CPU, mostly based on the 8080's architecture. Its 8-bit general-purpose registers can (according to Wikipedia) be used "as three 16-bit register pairs," as well as in the normal 8-bit mode. And here's my problem.
My first try at modeling this was individually named bytes and shorts, which worked fine until I re-read the specs page and found that the 16-bit registers aren't actually their own thing. Oops.
What I'm trying now is an array of bytes, with one location for each 8-bit register and two locations reserved for the stack/instruction pointers. This works well for the 8-bit registers and is a lot less of a hassle to manage, but I don't actually know how to read two adjacent bytes in memory as a single short. Is that even possible? If not, do you have any suggestions on how else to do this?

Solved by casting the address of the first byte of the 16-bit register pair to a void pointer and then to a short pointer. Not very type-safe, but hey, it works. Apparently I was just googling the wrong things.
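For reference, here is a minimal sketch of that approach, assuming the registers live in a uint8_t array (the array name and layout below are made up). The raw cast works on the usual compilers but technically violates strict aliasing (and alignment rules on some targets), so an equivalent using std::memcpy is shown alongside it; compilers optimise the memcpy into a single 16-bit load.

#include <cstdint>
#include <cstring>

uint8_t regs[12]; // hypothetical layout: B, C, D, E, H, L, A, flags, SP (2 bytes), PC (2 bytes)

// The cast described above: read two adjacent 8-bit registers as one 16-bit value.
// Note this yields the host's byte order, which matches the 8080 only on little-endian hosts.
uint16_t read_pair_cast(std::size_t first)
{
    return *reinterpret_cast<uint16_t*>(&regs[first]);
}

// An aliasing-safe equivalent.
uint16_t read_pair(std::size_t first)
{
    uint16_t value;
    std::memcpy(&value, &regs[first], sizeof value);
    return value;
}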

You can make a pair of uint8_ts that is also accessible as a single uint16_t by using a union:
union Register
{
    uint16_t word;
    struct
    {
        uint8_t lo, hi;
    } byte;
};
This way, if you have a Register r in scope, r.word will access the contents as a single 16-bit value, while r.byte.lo and r.byte.hi will access the first and second bytes individually. (The first one is lo because the Intel 8080 is a little-endian architecture.)
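A quick usage sketch of that union (note that reading a union member other than the one last written is technically undefined behaviour in standard C++, though the major compilers support this kind of type punning as an extension):

#include <cstdint>
#include <cstdio>

union Register
{
    uint16_t word;
    struct
    {
        uint8_t lo, hi;
    } byte;
};

int main()
{
    Register r;
    r.word = 0x1234;                                  // write the pair as one 16-bit value
    std::printf("%02X %02X\n", r.byte.hi, r.byte.lo); // prints "12 34" on a little-endian host

    r.byte.lo = 0xCD;                                 // write just the low 8-bit register
    std::printf("%04X\n", r.word);                    // prints "12CD" on a little-endian host
}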

Related

How do I serialize a struct of 8bit integers for socket programming?

I have a struct that has 5 unsigned 8-bit integers that mimics a frame with 5 packets. After researching, I know I need to serialize the data, field by field, especially since I am sending from a Windows machine to a Linux machine and back.
Here is my struct:
typedef struct pressure {
    UINT8 a;
    UINT8 b;
    UINT8 c;
    UINT8 d;
    UINT8 e;
} pressure;
The issue is I can't use htons() since my members must be 8 bits. How do I manually serialize this? It would be greatly appreciated if you could provide a short code sample that shows how to serialize and what to pass to send().
You can either write each individual byte using ostream::put, or, if you've ensured they're contiguous in memory in pressure (which every compiler I've ever used does without you doing anything actively), write the lot of them using ostream::write, as in:
my_ostream.write(reinterpret_cast<const char*>(&my_pressure.a), 5);
That said, consider keeping the values in an array so you're guaranteed they're contiguous in memory.
You don't need htonX / ntohX etc. - they're for normalising/denormalising multi-byte integer representations, which you don't have here.
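For the socket side the question asks about, here is a minimal sketch of a field-by-field version, with uint8_t standing in for the question's UINT8 and a made-up socket descriptor named sock. It uses the POSIX send(); on Windows (Winsock) the call is the same apart from the headers and the buffer parameter being const char*.

#include <cstdint>
#include <sys/socket.h> // send(); on Windows include <winsock2.h> instead

struct pressure { uint8_t a, b, c, d, e; };

// Copy each field into a byte buffer in a fixed, documented order.
void serialize(const pressure& p, uint8_t out[5])
{
    out[0] = p.a;
    out[1] = p.b;
    out[2] = p.c;
    out[3] = p.d;
    out[4] = p.e;
}

// Usage, assuming sock is already connected:
//   uint8_t buf[5];
//   serialize(p, buf);
//   send(sock, buf, sizeof buf, 0);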
Just write it as-is if you are sending from an Intel x86-based machine to another x86 machine (which most Linux boxes are).
If, however, you plan to send it to a machine based on another processor, the safest thing is to just send ASCII characters, so something like:
char buffer[32];
sprintf(buffer, "|%03d|%03d|%03d|%03d|%03d|", a, b, c, d, e);
would give you a fixed-length string readable by any machine. It's a good idea to have some sort of field separator (the "|" in this case) to help the receiver verify the string has not been garbled by an unreliable network.
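For completeness, a sketch of the receiving side for that 21-character frame (sscanf is just one way to parse it):

#include <cstdint>
#include <cstdio>

// Parse "|aaa|bbb|ccc|ddd|eee|" back into five 8-bit values.
// Returns true only if all five fields were found.
bool parse_frame(const char* buffer, uint8_t out[5])
{
    int v[5];
    if (std::sscanf(buffer, "|%3d|%3d|%3d|%3d|%3d|",
                    &v[0], &v[1], &v[2], &v[3], &v[4]) != 5)
        return false;
    for (int i = 0; i < 5; ++i)
        out[i] = static_cast<uint8_t>(v[i]);
    return true;
}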

Data conversion for ARM platform (from x86/x64)

We have developed a Win32 application for the x86 and x64 platforms. We want to use the same application on the ARM platform. Endianness will vary on the ARM platform, i.e. the ARM platform generally uses the big-endian format. So we want to handle this in our application for our device.
For example: // In x86/x64, int nIntVal = 0x12345678
In ARM, int nIntVal = 0x78563412
How values will be stored for the following data types in ARM?
double
char array i.e. char chBuffer[256]
int64
Please clarify this.
Regards,
Raphel
Endianness only matters for register <-> memory operations.
In a register there is no endianness. If you put
int nIntVal = 0x12345678
in your code it will have the same effect on a machine of any endianness.
All IEEE formats (float, double) are identical across architectures, so this does not matter.
You only have to care about endianness in two cases:
a) You write integers to files that have to be transferable between the two architectures.
Solution: use the hton*/ntoh* family of converters, use a non-binary file format (e.g. XML), or use a standardised file format (e.g. SQLite); a sketch of the hton*/ntoh* approach follows at the end of this answer.
b) You cast integer pointers.
int a = 0x12345678;
char b = a;            // always the least significant byte (0x78)
char c = *(char *)&a;  // whichever byte is stored first in memory
if (b == c) {
    // You are working on little endian
}
The latter code, by the way, is a handy way of testing your endianness at runtime.
Arrays and the like: if you use the write/fwrite families of calls to transfer them, you will have no problems unless they contain integers; then see above.
int64_t: see above. You only need to care if you have to store them in binary files or cast pointers.
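To illustrate case (a), here is a sketch of writing and reading a 32-bit integer in a fixed (big-endian, "network") byte order with htonl/ntohl; the function names are made up.

#include <arpa/inet.h> // htonl, ntohl; on Windows use <winsock2.h>
#include <cstdint>
#include <cstdio>

// Write one 32-bit value to a binary file in big-endian order.
void write_u32(std::FILE* f, uint32_t value)
{
    uint32_t be = htonl(value);        // host order -> big-endian
    std::fwrite(&be, sizeof be, 1, f);
}

// Read it back on any platform, regardless of that platform's endianness.
uint32_t read_u32(std::FILE* f)
{
    uint32_t be = 0;
    std::fread(&be, sizeof be, 1, f);
    return ntohl(be);                  // big-endian -> host order
}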
(Sergey L., above, says that you mostly don't have to care about the byte order. He is right, with at least one exception: I assumed you want to convert binary data from one platform to the other ...)
http://en.wikipedia.org/wiki/Endianness has a good overview.
In short:
Little endian means the least significant byte is stored first (at the lowest address).
Big endian means the most significant byte is stored first.
The order in which array elements are stored is not affected (but the byte order within each array element is, of course).
So:
char array: unchanged
int64: byte order is reversed compared to x86
With regard to the floating point format, consider http://en.wikipedia.org/wiki/Endianness#Floating-point_and_endianness. Generally it seems to obey the same rules of endianness as the integer format, but there are exceptions for older ARM platforms. (I have no first-hand experience of those.)
Generally I'd suggest testing your conversion of primitive types with controlled experiments first.
Also consider that compilers might use different padding in structs (a topic you haven't addressed yet).
Hope this helps.
In 98% of cases you don't need to care about endianness. Unless you need to transfer data between systems of different endianness, or read/write some endian-sensitive file format, you should not bother with it. And even in those cases, you can write your code so it performs properly when compiled under any endianness.
From Rob Pike's "The byte order fallacy" post:
Let's say your data stream has a little-endian-encoded 32-bit integer.
Here's how to extract it (assuming unsigned bytes):
i = (data[0]<<0) | (data[1]<<8) | (data[2]<<16) | (data[3]<<24);
If it's big-endian, here's how to extract it:
i = (data[3]<<0) | (data[2]<<8) | (data[1]<<16) | (data[0]<<24);
Both these snippets work on any machine, independent of the machine's byte order, independent of alignment issues, independent of just about anything. They are totally portable, given unsigned bytes and 32-bit integers.
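The same trick works for writing; a sketch of emitting i as little-endian bytes, equally independent of the machine's byte order:
data[0] = (i >> 0)  & 0xff;
data[1] = (i >> 8)  & 0xff;
data[2] = (i >> 16) & 0xff;
data[3] = (i >> 24) & 0xff;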
The ARM is little-endian; it has two big-endian variants depending on the architecture version, but it is better to just run native little-endian, since the tools and the volumes of code out there are more fully tested in little-endian mode.
Endianness is just one factor in system engineering; if you do your system engineering properly, it all works out: no fears, no worries. Define your interfaces and code to that design. Assuming, for example, that one processor's endianness automatically means you have to byte-swap is a bad assumption and will bite you eventually. You will end up swapping an even number of times to undo other bad assumptions that caused a swap (ideally swapping zero times, of course, rather than 2 or 4 or 6 times). If you have any endian concerns at all when writing code, you should write it endian-independent.
Since some ARMs have BE32 (word-invariant) and the newer ARMs have BE8 (byte-invariant), you would have to do even more work to make something generic that also tries to compensate for little-endian Intel, little-endian ARM, BE32 ARM and BE8 ARM. XScale tends to run big-endian natively but can be run as little-endian to reduce the headaches. You may be assuming that because an ARM clone is big-endian then all are big-endian; that is another bad assumption.

Why is std::bitset<8> 4 bytes big?

It seems for std::bitset<1 to 32>, the size is set to 4 bytes. For sizes 33 to 64, it jumps straight up to 8 bytes. There can't be any overhead because std::bitset<32> is an even 4 bytes.
I can see aligning to byte length when dealing with bits, but why would a bitset need to align to word length, especially for a container most likely to be used in situations with a tight memory budget?
This is under VS2010.
The most likely explanation is that bitset is using a whole number of machine words to store the array.
This is probably done for memory bandwidth reasons: it is typically relatively cheap to read/write a word that's aligned at a word boundary. On the other hand, reading (and especially writing!) an arbitrarily-aligned byte can be expensive on some architectures.
Since we're talking about a fixed-sized penalty of a few bytes per bitset, this sounds like a reasonable tradeoff for a general-purpose library.
I assume that indexing into the bitset is done by grabbing a 32-bit value and then isolating the relevant bit because this is fastest in terms of processor instructions (working with smaller-sized values is slower on x86). The two indexes needed for this can also be calculated very quickly:
int wordIndex = index >> 5;   // index / 32
int bitIndex  = index & 0x1f; // index % 32
And then you can do this, which is also very fast:
int word = m_pStorage[wordIndex];
bool bit = ((word & (1 << bitIndex)) >> bitIndex) == 1;
Also, a maximum waste of 3 bytes per bitset is not exactly a memory concern IMHO. Consider that a bitset is already the most efficient data structure to store this type of information, so you would have to evaluate the waste as a percentage of the total structure size.
For 1025 bits this approach uses 132 bytes instead of 129, for 2.3% overhead (and this goes down as the bitset size goes up). Sounds reasonable considering the likely performance benefits.
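You can observe the rounding directly with a small sketch; the exact numbers are implementation-specific (4/4/8 with the 32-bit VS2010 library described above, typically 8/8/8 with a 64-bit libstdc++):

#include <bitset>
#include <iostream>

int main()
{
    std::cout << sizeof(std::bitset<8>)  << ' '
              << sizeof(std::bitset<32>) << ' '
              << sizeof(std::bitset<64>) << '\n';
}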
The memory system on modern machines cannot fetch anything but whole words from memory, apart from some legacy operations that extract the desired bits. Hence, having the bitsets aligned to words makes them a lot faster to handle, because you do not need to mask out the bits you don't need when accessing them. If you do not mask, doing something like
std::bitset<4> foo = 0;
if (foo.any()) {
    // ...
}
will most likely fail. Apart from that, I remember reading some time ago that there was a way to cram several bitsets together, but I don't remember exactly how. I think it was when you have several bitsets together in a structure that they can take up "shared" memory, which is not applicable to most use cases of bitfields.
I ran into the same behaviour in the AIX and Linux implementations. In AIX, the internal bitset storage is char-based:
typedef unsigned char _Ty;
....
_Ty _A[_Nw + 1];
In Linux, the internal storage is long-based:
typedef unsigned long _WordT;
....
_WordT _M_w[_Nw];
For compatibility reasons, we modified the Linux version to use char-based storage.
Check which implementation you are using inside bitset.h.
Because a 32-bit Intel-compatible processor cannot access bytes individually (or rather, it can, by implicitly applying some bit masks and shifts), but only 32-bit words at a time.
If you declare
bitset<4> a, b, c;
then even if the library implements it as a char, a, b and c will be 32-bit aligned, so the same wasted space exists. But the processor would be forced to pre-mask the bytes before letting the bitset code do its own masking.
For this reason MS used an int[1 + (N-1)/32] as the container for the bits.
Maybe because it's using int by default, and switches to long long if it overflows? (Just a guess...)
If your std::bitset< 8 > was a member of a structure, you might have this:
struct A
{
    std::bitset< 8 > mask;
    void * pointerToSomething;
};
If bitset<8> was stored in one byte (and the structure packed on 1-byte boundaries) then the pointer following it in the structure would be unaligned, which would be A Bad Thing. The only time when it would be safe and useful to have a bitset<8> stored in one byte would be if it was in a packed structure and followed by some other one-byte fields with which it could be packed together. I guess this is too narrow a use case for it to be worthwhile providing a library implementation.
Basically, in your octree, a single byte bitset would only be useful if it was followed in a packed structure by another one to three single-byte members. Otherwise, it would have to be padded to four bytes anyway (on a 32-bit machine) to ensure that the following variable was word-aligned.
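A sketch illustrating the alignment point, with a plain one-byte member standing in for the hypothetical single-byte bitset: the compiler pads before the pointer anyway, so shrinking the bitset member alone would not shrink the struct.

#include <cstddef>
#include <cstdint>
#include <iostream>

struct B
{
    uint8_t mask;             // stand-in for a hypothetical single-byte bitset
    void*   pointerToSomething;
};

int main()
{
    // Padding is inserted after 'mask' so that the pointer stays aligned.
    std::cout << "offset of pointer: " << offsetof(B, pointerToSomething)
              << ", sizeof(B): " << sizeof(B) << '\n';
}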

Is there a relation between integer and register sizes?

I was recently challenged in an interview with a string manipulation problem and asked to optimize for performance. I had to use an iterator to move back and forth between TCHAR characters (with UNICODE support, 2 bytes each).
Not really thinking of the array length, I made a crucial mistake by using an int rather than size_t to iterate through it. I understand this is not compliant and not secure.
int i, size = _tcslen(str);
for(i=0; i<size; i++){
// code here
}
But the maximum memory we can allocate is limited. And if there is a relation between int and register sizes, it may be safe to use an integer.
E.g.: without any virtual mapping tools, we can only map 2^(register size) bytes. Since a TCHAR is 2 bytes long, half of that number of characters. For any system that has a 32-bit int, this is not going to be a problem even if you don't use an unsigned int. People with an embedded background are used to thinking of int as 16 bits, but memory size will be restricted on such a device. So I wonder whether there is an architectural fine-tuning decision relating integer and register sizes.
The C++ standard doesn't specify the size of an int. (It says that sizeof(char) == 1, and sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long).)
So there doesn't have to be a relation to register size. A fully conforming C++ implementation could give you 256 byte integers on your PC with 32-bit registers. But it'd be inefficient.
So yes, in practice, the size of the int datatype is generally equal to the size of the CPU's general-purpose registers, since that is by far the most efficient option.
If an int was bigger than a register, then simple arithmetic operations would require more than one instruction, which would be costly. If they were smaller than a register, then loading and storing the values of a register would require the program to mask out the unused bits, to avoid overwriting other data. (That is why the int datatype is typically more efficient than short.)
(Some languages simply require an int to be 32-bit, in which case there is obviously no relation to register size --- other than that 32-bit is chosen because it is a common register size)
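A quick way to see how loose the relation is in practice: on a typical 64-bit desktop the following prints 4 for int even though the general-purpose registers (and pointers) are 64 bits wide.

#include <iostream>

int main()
{
    std::cout << "int: "     << sizeof(int)   << " bytes, "
              << "long: "    << sizeof(long)  << " bytes, "
              << "pointer: " << sizeof(void*) << " bytes\n";
}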
Going strictly by the standard, there is no guarantee as to how big/small an int is, much less any relation to the register size. Also, some architectures have different sizes of registers (i.e. not all registers on the CPU are the same size), and memory isn't always accessed using just one register (like DOS with its Segment:Offset addressing).
With all that said, however, in most cases int is the same size as the "regular" registers since it's supposed to be the most commonly used basic type and that's what CPUs are optimized to operate on.
AFAIK, there is no direct link between register size and the size of int.
However, since you know which platform you're compiling the application for, you can define your own type aliases with the sizes you need:
Example
#ifdef WIN32   // Types for Win32 target
#define Int16 short
#define Int32 int
// .. etc.
#else          // for another target
// ...
#endif
Then, use the declared aliases.
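For instance (variable names made up):
Int32 sampleCount = 0; // 32 bits on this target
Int16 statusFlags = 0; // 16 bits on this target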
I am not totally sure I understand this correctly, since several different problems (memory sizes, allocation, register sizes, performance?) are mixed together here.
What I can say (just taking the headline) is that on most current processors, for maximum speed, you should use integers that match the register size. The reason is that when using smaller integers you have the advantage of needing less memory, but, for example on the x86 architecture, an additional instruction for conversion is often needed. Also, on Intel you have the problem that accesses to unaligned memory (mostly on register-sized boundaries) carry some penalty. Of course, on today's processors things are even more complex, since the CPUs are able to process instructions in parallel. So you end up fine-tuning for a specific architecture.
So the best guess, without knowing the architecture, is that register-sized ints are fastest speed-wise, as long as you can afford the memory.
I don't have a copy of the standard, but my old copy of The C Programming Language says (section 2.2) int refers to "an integer, typically reflecting the natural size of integers on the host machine." My copy of The C++ Programming Language says (section 4.6) "the int type is supposed to be chosen to be the most suitable for holding and manipulating integers on a given computer."
You're not the only person to say "I'll admit that this is technically a flaw, but it's not really exploitable."
There are different kinds of registers with different sizes. What's important are the address registers, not the general purpose ones. If the machine is 64-bit, then the address registers (or some combination of them) must be 64-bits, even if the general-purpose registers are 32-bit. In this case, the compiler may have to do some extra work to actually compute 64-bit addresses using multiple general purpose registers.
If you don't think that hardware manufacturers ever make odd design choices for their registers, then you probably never had to deal with the original 8086 "real mode" addressing.