The example below does not work as I expected (using ">" in the conditional) with GCC on an 8-bit machine, nor on 64-bit Linux. If I instead make Timer_Secs and FutureTimeout_1Minute e.g. 16 bits wide (int is 16 bits on those 8-bit targets), it works as it should (using ">" in the conditional).
So, to get this to work with signed 8-bit types I seem to have to use "==" in the conditional.
{
    // Example with 8-bit only as tested with GCC 3.2 on AVR ATmega 128
    unsigned int Cnt1 = 0;                  // 16 bits (kind of system clock)
    unsigned int Cnt2 = 0;                  // 16 bits (observer, see what happens)
    signed char Timer_Secs = 0;             // 8 bits with sign
    signed char FutureTimeout_1Minute = 60; // 8 bits with sign
    while (Cnt1 < 602)
    {   // ##
        if ((Timer_Secs - FutureTimeout_1Minute) == 0)
        {
            FutureTimeout_1Minute += 60; // Overflow/underflow allowed,
                                         // so wraps and changes sign
            Cnt2++;
        } else {} // No code
        Timer_Secs++;
        Cnt1++;
    }
    // ##
    // Cnt2 is 35 if > 0 above:  #define AFTER_F(a,b)    ((a-b)>0)
    // Cnt2 is 35 if >= 0 above: #define AFTER_EQ_F(a,b) ((a-b)>=0)
    // Cnt2 is 10 if == 0 above: #define AFTER_ON_F(a,b) ((a-b)==0) **EXPECTED!**
}
The comparison seems to involve some kind of sign conversion. I have not studied the assembly code for the different cases, since I thought this could be solved at the source level.
Why is this so?
To me this looks plain wrong.
Still, I assume it's compiled like this by design?
Is there some kind of type conversion or cast I could have applied?
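Not from the blog note, but a minimal sketch of one possible fix: the subtraction promotes both signed char operands to int (the usual arithmetic conversions), so the 8-bit wrap-around is gone before ">" is evaluated. Casting the difference back to signed char restores it (the narrowing conversion is implementation-defined before C++20/C23, but wraps modulo 256 on GCC):

#include <iostream>

#define AFTER_F(a,b) ((signed char)((a)-(b)) > 0) // cast restores the 8-bit wrap

int main() {
    signed char Timer_Secs = 100;
    signed char FutureTimeout_1Minute = -96; // 100 + 60 wrapped: timeout is 60 s ahead
    // Promoted comparison: 100 - (-96) = 196 > 0, so ">" says "after" too early...
    std::cout << ((Timer_Secs - FutureTimeout_1Minute) > 0) << '\n'; // 1
    // ...while the cast keeps the modular view: (signed char)196 is -60, not > 0.
    std::cout << AFTER_F(Timer_Secs, FutureTimeout_1Minute) << '\n'; // 0
}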
The context here is a blog note called "Know your timer's type" at http://www.teigfam.net/oyvind/home/technology/109-know-your-timers-type/
Disclaimer: this is a private blog with no money, ads, or freebies involved. Neither I nor anybody else gets anything from clicking on this URL or any of the URLs in my blog notes.
It works as expected in Go; see the blog note or http://play.golang.org/p/lH2SzZQEHG
It does not seem to work as expected in C++ either.
Related
I have this C code:
#include <stdio.h>
#include <windows.h>

int main() {
    int i;
    while (1) {
        for (i = 8; i <= 190; i++) {
            if (GetAsyncKeyState(i) == -32767) // that number doesn't actually work; only -32768 does (which checks whether the key is held down)
                printf("%c\n", i);             // with -32768, it prints the key about 20 times while it is held down
        }
    }
    return 0;
}
So -32767 doesn't work in that C code, but I have this C++ code:
#include <iostream>
#include <windows.h>
using namespace std;

int main() {
    char i;
    while (1) {
        for (i = 8; i <= 190; i++) {
            if (GetAsyncKeyState(i) == -32767) // works as intended: prints the pressed key letter (doesn't have to be held down)
                cout << i << endl;
        }
    }
    return 0;
}
Which works with -32767. This is very confusing, as both of these are run on the same computer with the same command: clang++ test.c
Output of C code with -32768 (pressing A):
A
A
A
A
A
A
A
A
A
A
A
Output of C++ code with -32767:
A
B
C
D
E
F
G
H
Output of C code with -32767:
(Nothing)
According to the documentation of GetAsyncKeyState:
If the function succeeds, the return value specifies whether the key was pressed since the last call to GetAsyncKeyState, and whether the key is currently up or down. If the most significant bit is set, the key is down, and if the least significant bit is set, the key was pressed after the previous call to GetAsyncKeyState. However, you should not rely on this last behavior; for more information, see the Remarks.
This says that it is not reliable to treat -32768 differently from -32767. It also says nothing about the bits in between, but your code assumes they are all 1 without justification.
To be reliable, your code should only do the following tests on the return value:
>= 0 - key currently up, or info unavailable
< 0 - key currently down
Your code relies on implementation-defined behavior. It appears that your C compiler uses unsigned chars, while your C++ compiler uses signed ones. That is why the nested loop in C goes all the way to 190, while the same loop in C++ wraps around to zero upon reaching 128.
You can fix this by making the type of i an unsigned char in both implementations. You could also make i an int, and add a cast to char where the character is printed.
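A minimal sketch of that fix (my addition, untested; it uses an int loop counter plus the sign-bit test recommended above):

#include <cstdio>
#include <windows.h>

int main() {
    while (1) {
        for (int i = 8; i <= 190; i++) { // int, so the loop can never wrap
            if (GetAsyncKeyState(i) < 0) // most significant bit set: key is down
                printf("%c\n", i);
        }
    }
    return 0;
}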
See this topic:
http://www.cplusplus.com/forum/general/141404/
-32767 is 1000 0000 0000 0001 in 16-bit two's-complement binary. When you compare the value returned from GetAsyncKeyState(i) against it, you are asking whether exactly the leftmost and rightmost bits are on, but as the link above says, bits 1-14 may not always be zero.
A more proper expression would be something like this:
if (GetAsyncKeyState(i) & -32767)
or maybe use a hex literal for the mask instead:
if (GetAsyncKeyState(i) & 0x8001)
Actually, the reason the C implementation didn't work was that I was manually executing ./a in the mingw32 terminal, and for some reason it didn't print anything until after a lot of keys had been pressed. I executed a.exe by double-clicking instead and it worked. This has been a very weird experience :/
Update to this: it seems there are some issues with the trig functions in math.h (using the MPIDE compiler). It is no wonder I couldn't see this with my debugger, which was using its own math.h and therefore giving me the expected (correct) results. I found this out by accident on the Microchip boards and implemented a 'fast sine/cosine' algorithm instead (see devmaster dot com for this). My ISR and ColourWheel array now work perfectly.
I must say that, as a relative newcomer to C/C++, I have spent a lot of hours reviewing and re-reviewing my own code for errors. The last thing on my mind was that some very basic functions, no doubt written decades ago, could cause such problems.
I suppose I would have seen the problem earlier if I'd had access to a dump of the actual array, but as my chip is connected to my LED cube, I have no way to access the data on the chip directly.
Hey ho! When I get the chance I'll post a link to a YouTube video showing the wave function I've now been able to program; it looks pretty good on my LED cube.
Russell
PS
Thank you all so very much for your help here. It stopped me from giving up completely by giving me some avenues to chase down. I certainly did not know much about endianness before this, so I learned about that, and about some systematic ways to approach robust debugging.
I have a problem when trying to access an array in an interrupt service routine (ISR).
The following is a snippet of code from inside the ISR:
if (CubeStatusArray[x][y][Layer]) {
    for (int8_t bitpos = 7; bitpos >= 0; bitpos--) {
        if ((ColourWheel[Colour] >> 16) & (1 << bitpos)) { // This line seems to cause trouble
            setHigh(SINRED_PORT, SINRED_PIN);
        }
        else {
            setLow(SINRED_PORT, SINRED_PIN);
        }
    }
}
..........
ColourWheel[Colour] has been declared as follows at the start of my program (outside any function):
static volatile uint32_t ColourWheel[255]; // the array from which the colours
                                           // are obtained; all set as 3 eight-bit
                                           // numbers using up 24 bits of a
                                           // 32-bit unsigned int
What this snippet does is take each bit of an eight-bit segment of the colour value and set the port/pin high or low accordingly, MSB first. (Some other code then updates a TLC5940 LED driver IC for each high/low on the pin, and the green and blue 8-bit segments are handled in the same way.)
This does not work, and the colours output to my LEDs behave incorrectly.
However, if I change the code as follows, then the routine works:
if (CubeStatusArray[x][y][Layer]) {
    for (int8_t bitpos = 7; bitpos >= 0; bitpos--) {
        if (0b00000000111111111110101010111110 >> 16) & (1 << bitpos)) { // This line seems to cause trouble
            setHigh(SINRED_PORT, SINRED_PIN);
        }
        else {
            setLow(SINRED_PORT, SINRED_PIN);
        }
    }
}
..........
(The actual binary number in the line is irrelevant: the first 8 bits are always zero, the next 8 bits represent the red colour, the next 8 the green, and the last 8 the blue.)
So why does the ISR work with the fixed number, but not if I try to use a number held in an array?
Following is the actual code showing the full RGB update:
if (CubeStatusArray[x][y][Layer]) {
    for (int8_t bitpos = 7; bitpos >= 0; bitpos--) {
        if ((ColourWheel[Colour] >> 16) & (1 << bitpos)) {
            setHigh(SINRED_PORT, SINRED_PIN);
        } else {
            setLow(SINRED_PORT, SINRED_PIN);
        }
        if ((ColourWheel[Colour] >> 8) & (1 << bitpos)) {
            setHigh(SINGREEN_PORT, SINGREEN_PIN);
        } else {
            setLow(SINGREEN_PORT, SINGREEN_PIN);
        }
        if ((ColourWheel[Colour]) & (1 << bitpos)) {
            setHigh(SINBLUE_PORT, SINBLUE_PIN);
        } else {
            setLow(SINBLUE_PORT, SINBLUE_PIN);
        }
        pulse(SCLK_PORT, SCLK_PIN);
        pulse(GSCLK_PORT, GSCLK_PIN);
        Data_Counter++;
        GSCLK_Counter++;
    }
}
I assume the missing ( after if is a typo.
The indicated research technique, in the absence of a debugger, is:
1. Confirm one more time that the test if( ( 0b00000000111111111110101010111110 >> 16 ) & ( 1 << bitpos ) ) works. Collect (print) the result for each bitpos; see the sketch after this list.
2. Store 0b00000000111111111110101010111110 in element 0 of the array. Repeat with if( ( ColourWheel[0] >> 16 ) & ( 1 << bitpos ) ). Collect results and compare with the base case.
3. Store 0b00000000111111111110101010111110 in all elements of the array. Repeat with if( ( ColourWheel[Colour] >> 16 ) & ( 1 << bitpos ) ) for several different Colour values (assigned manually, though). Collect results and compare with the base case.
4. Store 0b00000000111111111110101010111110 in all elements of the array. Repeat with if( ( ColourWheel[Colour] >> 16 ) & ( 1 << bitpos ) ) with Colour assigned normally. Collect results and compare with the base case.
5. Revert to the original program and retest. Collect results and compare with the base case.
I am fairly confident that the value in ColourWheel[Colour] is not what you expect, or is unstable. Validate the index range, and access the array only once. A code speed enhancement is included below.
[Edit] If the receiving end does not like the slower signal changes caused by replacing a constant with ColourWheel[Colour] >> 16, more efficient code may solve this:
if (CubeStatusArray[x][y][Layer]) {
    uint32_t value = 0;
    const uint32_t maskR = 0x800000UL;
    const uint32_t maskG = 0x8000UL;
    const uint32_t maskB = 0x80UL;
    if ((Colour >= 0) && (Colour < 255)) {
        value = ColourWheel[Colour];  // read the array once, outside the loop
    }
    // All you need to do is shift 'value'
    for (int8_t bitpos = 7; bitpos >= 0; bitpos--) {
        if (value & maskR) { setHigh(SINRED_PORT, SINRED_PIN); }     // set red
        else               { setLow(SINRED_PORT, SINRED_PIN); }
        if (value & maskG) { setHigh(SINGREEN_PORT, SINGREEN_PIN); } // set green
        else               { setLow(SINGREEN_PORT, SINGREEN_PIN); }
        if (value & maskB) { setHigh(SINBLUE_PORT, SINBLUE_PIN); }   // set blue
        else               { setLow(SINBLUE_PORT, SINBLUE_PIN); }
        value <<= 1;
    }
}
I have a long list of numbers between 0 and 67600. I want to store them using an array that is 67600 elements long: an element is set to 1 if the number is in the set and 0 if it is not, i.e. I need only 1 bit per number to store its presence. Is there any hack in C/C++ that helps me achieve this?
In C++ you can use std::vector<bool> if the size is dynamic (it's a special case of std::vector, see this), otherwise there is std::bitset (prefer std::bitset if possible). There is also boost::dynamic_bitset if you need to set/change the size at runtime. You can find info on it here; it is pretty cool!
In C (and C++) you can implement this manually with bitwise operators. A good summary of common operations is here. One thing I want to mention: it's a good idea to use unsigned integers when doing bit operations, since << and >> are undefined when shifting negative integers.
You will need to allocate an array of some integral type like uint32_t. If you want to store N bits, it will take N/32 of these uint32_ts, rounded up. Bit i is stored in the (i % 32)'th bit of the (i / 32)'th uint32_t; see the sketch after this paragraph. You may want to use a differently sized integral type depending on your architecture and other constraints.
Note: prefer an existing implementation (e.g. as described in the first paragraph for C++; search for C solutions) over rolling your own, unless you specifically want to, in which case I suggest learning more about binary/bit manipulation elsewhere before tackling this. This kind of thing has been done to death and there are "good" solutions.
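A minimal sketch of that layout (my addition; the helper names are illustrative, not from any library):

#include <cstddef>
#include <cstdint>
#include <vector>

// Bit i lives in word i / 32, at bit position i % 32.
void set_bit(std::vector<uint32_t>& bits, std::size_t i)        { bits[i / 32] |=  (uint32_t(1) << (i % 32)); }
void clear_bit(std::vector<uint32_t>& bits, std::size_t i)      { bits[i / 32] &= ~(uint32_t(1) << (i % 32)); }
bool test_bit(const std::vector<uint32_t>& bits, std::size_t i) { return (bits[i / 32] >> (i % 32)) & 1u; }

int main() {
    std::vector<uint32_t> bits((67600 + 31) / 32, 0); // N/32 words, rounded up
    set_bit(bits, 67599);
    return test_bit(bits, 67599) ? 0 : 1;             // exits with 0: the bit was set
}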
There are a number of tricks that may consume only one bit per value, e.g. arrays of bitfields (applicable in C as well), but whether less space actually gets used is up to the compiler. See this link.
Please note that whatever you do, you will almost surely never be able to use exactly N bits to store N bits of information: your computer very likely can't allocate less than 8 bits, so if you want 7 bits you'll waste 1, and if you want 9 you'll take 16 and waste 7. Even if your computer (CPU + RAM etc.) could operate on single bits, an allocator running under an OS with malloc/new could not sanely track data at such a small granularity due to the overhead. That last qualification was pretty silly; you won't find an architecture in use that allows you to operate on less than 8 bits at a time, I imagine :)
You should use std::bitset.
std::bitset functions like an array of bool (actually like std::array, since it copies by value), but uses only 1 bit of storage per element.
Another option is vector<bool>, which I don't recommend because:
It uses slower pointer indirection and heap memory to enable resizing, which you don't need.
That type is often maligned by standards-purists because it claims to be a standard container, but fails to adhere to the definition of a standard container*.
*For example, a standard-conforming function could expect &container.front() to produce a pointer to the first element of any container type, which fails with std::vector<bool>. Perhaps a nitpick for your use case, but still worth knowing about.
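A minimal usage sketch for the 67600-number case (my addition):

#include <bitset>
#include <iostream>

int main() {
    std::bitset<67600> present;             // one bit per number, all zero initially
    present.set(42);                        // mark 42 as present
    present.set(67599);
    std::cout << present.test(42) << ' '    // 1
              << present.test(43) << '\n';  // 0
    std::cout << present.count() << '\n';   // 2 numbers stored
}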
There is in fact! std::vector<bool> has a specialization for this: http://en.cppreference.com/w/cpp/container/vector_bool
See the doc; it stores the bools as efficiently as possible.
Edit: as somebody else said, std::bitset is also available: http://en.cppreference.com/w/cpp/utility/bitset
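A quick sketch of it in use (my addition):

#include <vector>

int main() {
    std::vector<bool> present(67600, false); // packed: roughly one bit per element
    present[123] = true;
    return present[123] ? 0 : 1;             // exits with 0
}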
If you want to write it in C, have an array of char large enough to hold 67601 bits (67601 / 8 rounded up = 8451 bytes) and then turn on/off the appropriate bit for each value.
Others have given the right idea. Here's my own implementation of a bitsarr, or 'array' of bits. An unsigned char is one byte, so this is essentially an array of unsigned chars that stores information in individual bits. In addition to ONE-bit values I added the option of storing TWO- and FOUR-bit values, because both divide 8 (the size of a byte) and would be useful for storing a huge number of integers that range from 0-3 or 0-15.
When setting and getting, the math is done inside the functions, so you can just give them an index as if it were a normal array; they know where to look.
Also, it's the user's responsibility not to pass a value to set that's too large, or it will screw up other values. It could be modified so that overflow wraps around to 0, but that would just make it more convoluted, so I decided to trust myself.
#include <stdio.h>
#include <stdlib.h>

#define BYTE 8

typedef enum {ONE=1, TWO=2, FOUR=4} numbits;

typedef struct bitsarr {
    unsigned char* buckets;
    numbits n;
} bitsarr;

bitsarr new_bitsarr(int size, numbits n)
{
    int b = sizeof(unsigned char)*BYTE;
    int numbuckets = (size*n + b - 1)/b;
    bitsarr ret;
    ret.buckets = malloc(sizeof(*ret.buckets)*numbuckets); /* element size, not pointer size */
    ret.n = n;
    return ret;
}

void bitsarr_delete(bitsarr xp)
{
    free(xp.buckets);
}

void bitsarr_set(bitsarr *xp, int index, int value)
{
    int buckdex, innerdex;
    buckdex = index/(BYTE/xp->n);
    innerdex = index%(BYTE/xp->n);
    xp->buckets[buckdex] = (value << innerdex*xp->n) | ((~(((1 << xp->n) - 1) << innerdex*xp->n)) & xp->buckets[buckdex]);

    /* longer version:
    unsigned int width, width_in_place, zeros, old, newbits, new;
    width = (1 << xp->n) - 1;
    width_in_place = width << innerdex*xp->n;
    zeros = ~width_in_place;
    old = xp->buckets[buckdex];
    old = old & zeros;
    newbits = value << innerdex*xp->n;
    new = newbits | old;
    xp->buckets[buckdex] = new; */
}

int bitsarr_get(bitsarr *xp, int index)
{
    int buckdex, innerdex;
    buckdex = index/(BYTE/xp->n);
    innerdex = index%(BYTE/xp->n);
    return ((((1 << xp->n) - 1) << innerdex*xp->n) & (xp->buckets[buckdex])) >> innerdex*xp->n;

    /* longer version:
    unsigned int width = (1 << xp->n) - 1;
    unsigned int width_in_place = width << innerdex*xp->n;
    unsigned int val = xp->buckets[buckdex];
    unsigned int retshifted = width_in_place & val;
    unsigned int ret = retshifted >> innerdex*xp->n;
    return ret; */
}

int main()
{
    bitsarr x = new_bitsarr(100, FOUR);
    for (int i = 0; i < 16; i++)
        bitsarr_set(&x, i, i);
    for (int i = 0; i < 16; i++)
        printf("%d\n", bitsarr_get(&x, i));
    for (int i = 0; i < 16; i++)
        bitsarr_set(&x, i, 15-i);
    for (int i = 0; i < 16; i++)
        printf("%d\n", bitsarr_get(&x, i));
    bitsarr_delete(x);
    return 0;
}
I am looking for a library, or example code, for parsing a binary message in C++. Most people ask about reading a binary file, or data received on a socket, but I just have a set of binary messages I need to decode. Somebody mentioned boost::spirit, but I haven't been able to find a suitable example for my needs.
As an example:
9A690C12E077033811FFDFFEF07F042C1CE0B704381E00B1FEFFF78004A92440
where the first 8 bits are a preamble, the next 6 bits the msg ID (an integer from 0 to 63), the next 212 bits are data, and the final 24 bits are a CRC24.
So in this case, msg 26, I have to get this data from the 212 data bits:
4 bits integer value
4 bits integer value
A 9 bit float value from 0 to 63.875, where LSB is 0.125
4 bits integer value
And so on.
EDIT: I need to operate at bit level, so a memcpy is not a good solution, since it copies whole bytes. To get the first 4-bit integer value I would have to take 2 bits from one byte and another 2 bits from the next byte, shift each pair, and compose them. What I am asking for is a more elegant way of extracting the values, because I have about 20 different messages and want a common solution to parse them at bit level.
Do you know of any library which can easily achieve this?
I also found other Q/A where static_cast is being used. I googled it, and for each person recommending this approach there is another one warning about endianness. Since I already have my message, I don't know if such a warning applies to me, or if it is just for socket communications.
EDIT: boost::dynamic_bitset looks promising. Any help using it?
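Not from any library, but a minimal sketch of the shift-and-compose extraction described in the EDIT above (the function name and the bit offsets for msg 26 are my own illustration, derived from the field list in the question):

#include <cstddef>
#include <cstdint>

// Read 'count' bits MSB-first, starting at absolute bit offset 'pos' in a byte buffer.
uint32_t get_bits(const uint8_t* buf, std::size_t pos, unsigned count) {
    uint32_t value = 0;
    for (unsigned k = 0; k < count; ++k, ++pos) {
        unsigned bit = (buf[pos / 8] >> (7 - pos % 8)) & 1u; // MSB of each byte first
        value = (value << 1) | bit;
    }
    return value;
}

// Usage for the layout above: 8-bit preamble, 6-bit ID, then the data fields.
// uint32_t msg_id = get_bits(msg, 8, 6);
// uint32_t field1 = get_bits(msg, 14, 4);   // first 4-bit integer
// uint32_t field2 = get_bits(msg, 18, 4);   // second 4-bit integer
// float f = get_bits(msg, 22, 9) * 0.125f;  // 9-bit value, LSB = 0.125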
If you can't find a generic library to parse your data, use bitfields to describe the layout and memcpy() the raw message into a variable of that struct type. See the link Bitfields. This will be more streamlined for your application.
Don't forget to pack the structure.
Example:
#pragma pack(1)
#include "order32.h"

struct yourfields {
#if O32_HOST_ORDER == O32_BIG_ENDIAN
    unsigned int preamble:8;
    unsigned int msgid:6;
    unsigned data:212; /* caveat: a bit-field wider than its type is not valid standard C/C++; shown for layout only */
    unsigned crc:24;
#else
    unsigned crc:24;
    unsigned data:212; /* same caveat as above */
    unsigned int msgid:6;
    unsigned int preamble:8;
#endif
} /*__attribute__((packed)) for gcc*/;
You can do a little check to determine whether your machine uses LITTLE ENDIAN or BIG ENDIAN format, and define it as a symbol. (Note: O32_HOST_ORDER below is a run-time value, not a preprocessor constant, so the #if in the struct above will not actually see it; a true compile-time test needs a compiler-provided macro such as GCC's __BYTE_ORDER__.)
//order32.h
#ifndef ORDER32_H
#define ORDER32_H
#include <limits.h>
#include <stdint.h>
#if CHAR_BIT != 8
#error "unsupported char size"
#endif
enum
{
O32_LITTLE_ENDIAN = 0x03020100ul,
O32_BIG_ENDIAN = 0x00010203ul,
O32_PDP_ENDIAN = 0x01000302ul
};
static const union { unsigned char bytes[4]; uint32_t value; } o32_host_order =
{ { 0, 1, 2, 3 } };
#define O32_HOST_ORDER (o32_host_order.value)
#endif
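A small run-time usage sketch (my addition; note, per the remark above, that O32_HOST_ORDER is tested with if, not #if):

#include <cstdio>
#include "order32.h"

int main() {
    if (O32_HOST_ORDER == O32_LITTLE_ENDIAN)
        puts("little-endian host");
    else if (O32_HOST_ORDER == O32_BIG_ENDIAN)
        puts("big-endian host");
    else
        puts("other byte order");
    return 0;
}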
Thanks to Christoph for this code, here.
Example program using bitfields, and its outputs:

#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <memory.h>
using namespace std;

struct bitfields {
    unsigned opcode:5;
    unsigned info:3;
} __attribute__((packed));

struct bitfields opcodes;

/* info: 3 bits; opcode: 5 bits */
/* 001 10001 => 0x31 */
/* 010 10010 => 0x52 */

void set_data(unsigned char data)
{
    memcpy(&opcodes, &data, sizeof(data));
}

void print_data()
{
    cout << opcodes.opcode << ' ' << opcodes.info << endl;
}

int main(int argc, char *argv[])
{
    set_data(0x31);
    print_data();            // must print 17 1 on my little-endian machine
    set_data(0x52);
    print_data();            // must print 18 2
    cout << sizeof(opcodes); // must print 1
    return 0;
}
You can manipulate the bits yourself; for example, to parse a 4-bit integer value:

char byte_data[64];
size_t readPos = 3; // any byte
int value = 0;
int bits_to_read = 4;
for (int i = 0; i < bits_to_read; ++i) {
    if (static_cast<unsigned char>(byte_data[readPos]) & (1u << i))
        value |= (1 << i);
}
Floats are usually sent as string data:

std::string temp;
temp.assign(byte_data + readPos, 9);
float value = std::stof(temp);
If your data contains a custom float format, then just extract the bits and do your math:

char byte_data[64];
size_t readPos = 3; // starting byte
float value = 0;
int i = 0;
int bits_to_read = 9;
while (bits_to_read) {
    if (i > 7) {    // past bit 7: move on to the next byte
        ++readPos;
        i = 0;
    }
    const int bit = (static_cast<unsigned char>(byte_data[readPos]) >> i) & 1;
    // here your code
    ++i;
    --bits_to_read;
}
Here is a good article that describes several solutions to the problem.
It even contains a reference to the ibstream class that the author created specifically for this purpose (the link seems dead, though). The only other mention of this class I could find is in the bit C++ library here; it might be what you need, though it's not popular and it's under the GPL.
Anyway, boost::dynamic_bitset might be the best choice, as it's time-tested and community-proven. But I have no personal experience with it.
I've encountered some strange behaviour when trying to promote a short to an int: the upper 2 bytes are 0xFFFF after promotion. AFAIK the upper bytes should always remain 0. See the following code:
unsigned int test1 = proxy0->m_collisionFilterGroup;
unsigned int test2 = proxy0->m_collisionFilterMask;
unsigned int test3 = proxy1->m_collisionFilterGroup;
unsigned int test4 = proxy1->m_collisionFilterMask;
if( test1 & 0xFFFF0000 || test2 & 0xFFFF0000 || test3 & 0xFFFF0000 || test4 & 0xFFFF0000 )
{
std::cout << "test";
}
The values of the involved variables once the cout is hit are shown in the screenshot; note the two highlighted values. I also looked at the disassembly, which also looks fine to me.
My software targets x64, compiled with VS 2008 SP1. I also link in an out-of-the-box version of Bullet Physics 2.80. The proxy objects are Bullet objects.
The proxy class definition is as follows (with some functions trimmed out):
///The btBroadphaseProxy is the main class that can be used with the Bullet broadphases.
///It stores collision shape type information, collision filter information and a client object, typically a btCollisionObject or btRigidBody.
ATTRIBUTED_ALIGNED16(struct) btBroadphaseProxy
{
    BT_DECLARE_ALIGNED_ALLOCATOR();

    ///optional filtering to cull potential collisions
    enum CollisionFilterGroups
    {
        DefaultFilter = 1,
        StaticFilter = 2,
        KinematicFilter = 4,
        DebrisFilter = 8,
        SensorTrigger = 16,
        CharacterFilter = 32,
        AllFilter = -1 // all bits set: DefaultFilter | StaticFilter | KinematicFilter | DebrisFilter | SensorTrigger
    };

    // Usually the client btCollisionObject or Rigidbody class
    void* m_clientObject;
    short int m_collisionFilterGroup;
    short int m_collisionFilterMask;
    void* m_multiSapParentProxy;
    int m_uniqueId; // m_uniqueId is introduced for paircache. could get rid of this, by calculating the address offset etc.
    btVector3 m_aabbMin;
    btVector3 m_aabbMax;

    SIMD_FORCE_INLINE int getUid() const
    {
        return m_uniqueId;
    }

    // used for memory pools
    btBroadphaseProxy() : m_clientObject(0), m_multiSapParentProxy(0)
    {
    }

    btBroadphaseProxy(const btVector3& aabbMin, const btVector3& aabbMax, void* userPtr, short int collisionFilterGroup, short int collisionFilterMask, void* multiSapParentProxy = 0)
        : m_clientObject(userPtr),
          m_collisionFilterGroup(collisionFilterGroup),
          m_collisionFilterMask(collisionFilterMask),
          m_aabbMin(aabbMin),
          m_aabbMax(aabbMax)
    {
        m_multiSapParentProxy = multiSapParentProxy;
    }
};
I've never had this issue before and only started getting it after upgrading to 64-bit and integrating Bullet. The only place I am getting issues is where Bullet is involved, so I suspect the issue is related to that somehow, but I am still super confused about what could make assignments between primitive types not behave as expected.
Thanks
You are requesting a conversion from signed to unsigned. This is pretty straightforward:
Your source value is -1. Since the type is short int, on your platform that has the bits 0xFFFF.
The target type is unsigned int. -1 cannot be expressed as an unsigned int, but the conversion rule is defined by the standard: pick the positive value that is congruent to -1 modulo 2^N, where N is the number of value bits of the unsigned type.
On your platform, unsigned int has 32 value bits, so the modular representative of -1 modulo 2^32 is 0xFFFFFFFF.
If your own imaginary rules were to apply, you would want the result 0x0000FFFF, which is 65535, and not related to -1 in any obvious or useful way.
If you do want that conversion, you must perform the modular wrap-around on the short type manually:
short int mo = -1;
unsigned int weird = static_cast<unsigned short int>(mo);
Nutshell: C++ is about values, not about representations.
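A tiny demonstration of the two conversions (my addition, assuming 16-bit short and 32-bit unsigned int):

#include <iostream>

int main() {
    short mo = -1;
    unsigned int direct = mo;                              // modular conversion: 0xFFFFFFFF
    unsigned int weird  = static_cast<unsigned short>(mo); // wrap at 16 bits first: 0x0000FFFF
    std::cout << std::hex << direct << ' ' << weird << '\n';
    // prints: ffffffff ffff
}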
AFAIK the upper bytes should always remain 0

When promoting a short to an int, sign extension is used: the new upper bytes are filled with copies of the sign bit of the source value (the same thing an arithmetic, or signed, shift does to the bits it vacates). To answer your question, it's enough to know that the added bytes repeat the sign bit. Example:

short b = -1; // bits 0xFFFF
int a = b;    // promotion sign-extends: bits 0xFFFFFFFF, value still -1

It is important to notice that in the computer's memory the representations of signed and unsigned values can be identical; the only difference is in the instructions the compiler generates. Example:

unsigned short i = -1;  // bits 0xffff, value 65535
short j = 65535;        // bits 0xffff, value -1 (on two's complement)

Signedness does matter for the result of promotion, though: a signed short is sign-extended as above, while an unsigned short is zero-extended, so its upper bytes become 0.
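A tiny demonstration of the two extensions (my addition, assuming 16-bit short and 32-bit int):

#include <iostream>

int main() {
    short s = -1;              // bits 0xFFFF
    unsigned short u = 0xFFFF; // same bit pattern
    int a = s;                 // sign-extended: bits 0xFFFFFFFF, value -1
    int b = u;                 // zero-extended: bits 0x0000FFFF, value 65535
    std::cout << std::hex << static_cast<unsigned int>(a) << ' ' << b << '\n';
    // prints: ffffffff ffff
}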