Can someone explain this for me?
Addresses are for individual bytes (8 bits)
I have pasted the entire paragraph below:
The MIPS has a 32 bit architecture, with 32 bit instructions, a 32
bit data word, and 32 bit addresses.
It has 32 addressable internal registers requiring a 5 bit register address. Register 0 always has the constant value 0.
Addresses are for individual bytes (8 bits) but instructions must have
addresses which are a multiple of 4. This is usually stated as “instructions must be word aligned in memory.”
Link to pdf:
http://web.cs.mun.ca/~paul/cs3725/material/review.pdf
In the code below, I don't understand IMem[i] = bitset<8>(line)
explain this “Addresses are for individual bytes (8 bits)” for me?
It means that the size of a byte is 8 bits. Two adjacent addresses are one byte (8 bits) apart. A 32-bit word consists of 4 bytes.
Furthermore, it means that each byte has a unique address, even though the address operands of instructions must be aligned to a 4-byte boundary, as explained in the following sentence.
By unique address, do you mean unique 5 bit values?
No. The memory addresses are 32 bit values.
where are addresses usually saved?
Wherever any values are saved. In the given description, two possible places have been mentioned: in memory, or in a register.
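To make the byte-versus-word distinction concrete, here is a small sketch of my own (not from the question's course material) that prints the address of each byte of one 32-bit word:

#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t word = 0x11223344;   // one 32-bit data word
    unsigned char* byte = reinterpret_cast<unsigned char*>(&word);

    // Each of the four bytes making up the word has its own address;
    // adjacent addresses differ by exactly one (one byte = 8 bits).
    for (int i = 0; i < 4; ++i) {
        std::cout << static_cast<const void*>(byte + i) << "\n";
    }
}

The four printed addresses are consecutive, and the address of the whole word is the address of its first byte.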
Related
If an int is stored in memory in 4 bytes, each of which has a unique address, which of these four addresses does a pointer to that int store?
A pointer to int (an int*) stores the address of the first byte of the integer. The size of int is known to the compiler, so it only needs to know where the integer starts.
How the bytes of the int are interpreted depends on the endianness of your machine, but that doesn't change the fact that the pointer just stores the starting address (the endianness is also known to the compiler).
Those 4 int bytes are not stored at random locations - they are consecutive. So it is enough to store the address of the first byte of the object.
Depends on the architecture. On a big-endian architecture (M68K, IBM z series), it’s usually the address of the most significant byte. On a little-endian architecture (x86), it’s usually the address of the least-significant byte:
  A     A+1   A+2   A+3    big-endian
+-----+-----+-----+-----+
| msb |     |     | lsb |
+-----+-----+-----+-----+
 A+3   A+2   A+1    A      little-endian
There may be other oddball addressing schemes I’m leaving out.
But basically it’s whatever the underlying architecture considers the “first” byte of the word.
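A quick way to see which byte your machine puts at the lowest address is a sketch like the following (my own illustration, not part of the answer above):

#include <cstdint>
#include <cstdio>

int main() {
    std::uint32_t value = 0x11223344;
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&value);

    // On a little-endian machine this prints 44 33 22 11 (lsb first);
    // on a big-endian machine it prints 11 22 33 44 (msb first).
    for (int i = 0; i < 4; ++i)
        std::printf("%02x ", p[i]);
    std::printf("\n");
}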
The C Standard does not specify how addresses are represented inside pointers. Yet on most current architectures, a pointer to an int stores its address as the offset in the process' memory space of the first byte of memory used to store it, more precisely the byte with the lowest address.
Note however these remarks:
the int may have more or fewer than 32 bits. The only constraint is it must have at least 15 value bits and a sign bit.
bytes may have more than 8 bits. Most current architectures use 8-bit bytes but early Unix systems had 9-bit bytes, and some DSP systems have 16-, 24- or even 32-bit bytes.
when an int is stored using multiple bytes, it is unspecified how its bits are split among these bytes. Many systems use little-endian representation where the least-significant bits are in the first byte, other systems use big-endian representation where the most significant bits and the sign bit are in the first byte. Other representations are possible but only in theory.
many systems require that the address of an int be aligned on a multiple of its size.
how pointers are stored in memory is also system specific and unspecified. Addresses do not necessarily represent offsets in memory, real or virtual. For example, 64-bit pointers on some modern CPUs have a number of bits that can be ignored or that may contain a cryptographic signature verified on the fly by the CPU. Adding one to the stored value of a pointer does not necessarily produce a valid pointer.
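The implementation-defined quantities mentioned in these remarks can be inspected through standard headers; a minimal sketch (the values in the comments are merely typical, not guaranteed):

#include <climits>
#include <iostream>

int main() {
    // All of these are implementation-defined.
    std::cout << "bits per byte (CHAR_BIT): " << CHAR_BIT << "\n";     // commonly 8
    std::cout << "sizeof(int):              " << sizeof(int) << "\n";  // commonly 4
    std::cout << "sizeof(int*):             " << sizeof(int*) << "\n"; // commonly 8 on 64-bit systems
}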
If an int is stored in memory in 4 bytes, each of which has a unique address, which of these four addresses does a pointer to that int store?
A pointer to int usually stores the address value of the first byte (which is stored at the lowest memory address) of the int object.
Since the size of an int is known and constant for a specific implementation/architecture, and an int object is always stored in consecutive bytes (there are no gaps between them), it is clear that the following bytes (three of them, if sizeof(int) == 4) belong to the same int object.
How the bytes of the int object are interpreted depends on endianness*.
The first byte is usually aligned automatically on a multiple of the data word size of the specific architecture, so that the CPU can work with it most efficiently.
In a 32-bit architecture, for example, where the data word size is 4, the first byte lies on a 4-byte boundary - an address that is a multiple of 4.
By the way, sizeof(int) is not always 4 (although 4 is common).
*Endianness determines whether the interpretation of the object starts at the most significant byte (the first one) or the least significant byte (the last one).
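To check the size and alignment described above on a particular implementation, a sketch like this can be used (the printed values vary by platform):

#include <cstdint>
#include <iostream>

int main() {
    int x = 0;
    // alignof(int) is the boundary the first byte is placed on;
    // the address of x is always a multiple of it.
    std::cout << "sizeof(int)  = " << sizeof(int) << "\n";
    std::cout << "alignof(int) = " << alignof(int) << "\n";
    std::cout << "aligned: "
              << (reinterpret_cast<std::uintptr_t>(&x) % alignof(int) == 0)
              << "\n";
}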
I have found in the C167 documentation a note on pointer arithmetic.
There are two macros, _huge and _shuge.
A quote from the documentation:
_huge or _shuge. Huge data may be anywhere in memory and you can also reference it using a 24 bit address. However, address arithmetic is done using the complete address (24 bit). Shuge data may also be anywhere in memory and you can also reference it using a 24 bit address. However, address arithmetic is done using a 16 bit address.
So what is the difference in the usage of _huge vs _shuge?
In my understanding, pointer arithmetic uses an offset from a start address.
Example of what I understood so far:
&a[0] + 1, where one element of a is an int32: &a[0] gives me the address of the first element, so this would be equal to 0x1234211 + 32 bits, for example.
Is there a difference, considering the note above, and what is the difference between _huge and _shuge?
best regards
Huge was used in the segmented addressing of the (good?) old 8086 family. These were 16-bit processors with a 20-bit address bus. A full address was given by a segment address (16 bits) and an offset (again 16 bits), with the following formula:
linear_address = segment * 16 + offset
The difference between two _huge addresses was computed by first converting both to 24-bit linear addresses and subtracting those, while for _shuge ones the segments and offsets were subtracted separately.
Example: 0011:1236 - 0010:1234 would give 0000:0012 (18) if computed as _huge, and 0001:0002 if computed as _shuge.
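As an illustration only (this is my own model of the two rules, not the compiler's actual _huge/_shuge macros), the subtraction semantics can be sketched like this:

#include <cstdint>
#include <iostream>

// A 16-bit segment/offset pair, as used in the example above.
struct SegOff { std::uint16_t seg, off; };

// "huge"-style difference: convert both operands to linear addresses first.
std::int32_t diff_huge(SegOff a, SegOff b) {
    std::int32_t la = a.seg * 16 + a.off;
    std::int32_t lb = b.seg * 16 + b.off;
    return la - lb;
}

// "shuge"-style difference: subtract segment and offset independently.
SegOff diff_shuge(SegOff a, SegOff b) {
    return { static_cast<std::uint16_t>(a.seg - b.seg),
             static_cast<std::uint16_t>(a.off - b.off) };
}

int main() {
    SegOff a{0x0011, 0x1236}, b{0x0010, 0x1234};
    std::cout << std::hex << diff_huge(a, b) << "\n";   // 12 (hex) = 18
    SegOff d = diff_shuge(a, b);
    std::cout << d.seg << ":" << d.off << "\n";          // 1:2
}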
It's obliquely explained on the 17th page (labeled as page 7) of this PDF: https://www.tasking.com/support/c166/c166_user_guide_v4.0.pdf
By default all __far pointer arithmetic is 14-bit. This implies that comparison of __far pointers is also done in 14-bit. For __shuge the same is true, but then with 16-bit arithmetic. This saves code significantly, but has the following implications:
• Comparing pointers to different objects is not reliable. It is only reliable when it is known that these objects are located in the same page.
• Comparing with NULL is not reliable. Objects that are located in another page at offset 0x0000 have the low 14 bits (the page offset) zero and will also be evaluated as NULL.
In other words, _shuge pointers' bits above the lowest 16 are ignored except when dereferencing them. You may also note that _shuge pointers have 16-bit alignment, meaning their lowest 4 bits are always zero and therefore only 12 bits need to be considered in comparison or subtraction.
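The practical consequence for comparisons can be sketched as follows (illustrative only; it assumes 24-bit addresses and the 16-bit _shuge arithmetic quoted above):

#include <cstdint>
#include <iostream>

// A "shuge-style" equality test that only looks at the low 16 bits
// of a 24-bit address, as the quoted manual describes.
bool shuge_equal(std::uint32_t a, std::uint32_t b) {
    return (a & 0xFFFFu) == (b & 0xFFFFu);
}

int main() {
    // Two addresses in different 64 KB segments but at the same offset
    // compare equal, which is why such comparisons are unreliable.
    std::cout << shuge_equal(0x010000u, 0x000000u) << "\n";   // prints 1
}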
I want to know how bitset actually allocates memory. I read on a blog that it takes up memory in bits. However, when I run the following code:
bitset<3> bits = 001;
cout<<sizeof(bits);
I get the output as 4. What is the explanation behind it?
Also is there a method to allocate space in bits in C++?
You can approximate sizeof(bitset<N>) as:
If the internal representation is 32-bit (like unsigned on 32-bit systems): 4 * ((N + 31) / 32)
If the internal representation is 64-bit (like unsigned long on 64-bit systems): 8 * ((N + 63) / 64)
It seems that the first is true here: 4 * ((3 + 31) / 32) is 4.
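You can verify this on your own implementation with a quick sketch like the one below (the results are implementation-defined; the comments show what a 32-bit-block and a 64-bit-block implementation would typically print):

#include <bitset>
#include <iostream>

int main() {
    // With 32-bit blocks these typically print 4, 4, 8;
    // with 64-bit blocks, 8, 8, 8. Either is allowed.
    std::cout << sizeof(std::bitset<3>)  << "\n";
    std::cout << sizeof(std::bitset<32>) << "\n";
    std::cout << sizeof(std::bitset<33>) << "\n";
}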
I get the output as 4. What is the explanation behind it?
The standard says nothing about how bitset must be implemented. It is implementation-defined; look at the <bitset> header of your compiler.
Also is there a method to allocate space in bits in C++?
No, there is no method to allocate space in bits in C++.
Your CPU doesn't operate on individual bits, but on bytes and words. In your case, sizeof(bits) is 4 because the compiler decided to align this data structure to 4 bytes.
Typically on a 32-bit processor the compiler will make the allocated size a multiple of 4 bytes, and the nearest multiple of 4 bytes that can hold 3 bits (3/8 of a byte) is 4 bytes.
You cannot address separate bits; the lowest addressable unit is the byte. So no, you cannot allocate bits precisely.
Another thing is padding - you almost always get more bytes allocated than you asked for, for optimization purposes. Addressing bytes that are not on 32-bit boundaries is often expensive, and on x64 CPUs some instructions fault when their operands are not properly aligned (speaking of the Intel platform).
I am trying to find the difference because of the byte flip functionality I see in the Calculator on Mac in Programmer's view.
So I wrote a program to byte-swap a value, which is what we do to go from little to big endian or the other way round, and I call it byte swap. But when I see byte flip I do not understand what exactly it is and how it is different from byte swap. I did confirm that the results are different.
For example, for an int with value 12976128
Byte Flip gives me 198;
Byte swap gives me 50688.
I want to implement an algorithm for byte flip, since 198 is the value I want to get when reading something. Everything I find on Google says byte flip is done with the help of byte swap, which isn't the case for me.
Byte flip and byte swap are synonyms.
The results you see are just two different ways of swapping the bytes, depending on whether you look at the number as a 32-bit number (consisting of 4 bytes), or as the smallest size of a number that can hold 12976128, which is 24 bits or 3 bytes.
The 4-byte swap is more usual in computer culture, because 32-bit processors are currently predominant (even 64-bit architectures still do much of their arithmetic on 32-bit numbers, partly because of backward-compatible software infrastructure, partly because it is enough for many practical purposes). But the Mac Calculator seems to use the minimum-width swap, in this case a 3-byte swap.
12976128, when converted to hexadecimal, gives you 0xC60000. That's 3 bytes total; each hexadecimal digit is 4 bits, or half a byte, wide. The bytes to be swapped are 0xC6, zero, and another zero.
After the 3-byte swap: 0x0000C6 = 198
After the 4-byte swap: 0x0000C600 = 50688
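If you want to reproduce both numbers, here is a sketch of my own (assuming a 32-bit unsigned input) showing the two interpretations:

#include <cstdint>
#include <iostream>

// Reverse all four bytes of a 32-bit value.
std::uint32_t swap32(std::uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u) | (v << 24);
}

// Reverse only the lowest three bytes (the minimal width that holds 0xC60000).
std::uint32_t swap24(std::uint32_t v) {
    std::uint32_t b0 = v & 0xFFu, b1 = (v >> 8) & 0xFFu, b2 = (v >> 16) & 0xFFu;
    return (b0 << 16) | (b1 << 8) | b2;
}

int main() {
    std::uint32_t value = 12976128;      // 0x00C60000
    std::cout << swap24(value) << "\n";  // 198   (the Calculator's "byte flip")
    std::cout << swap32(value) << "\n";  // 50688 (the full 32-bit "byte swap")
}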