Splitting and casting an address into different integers in Ada

To interface with a certain piece of hardware (in this case a TSS entry of an x86 GDT), it is required to use the following structure in memory:
type UInt32 is mod 2 ** 32;
type UInt16 is mod 2 ** 16;
type UInt8  is mod 2 ** 8;

type TSSEntry is record
   Limit       : UInt16;
   BaseLow16   : UInt16;
   BaseMid8    : UInt8;
   Flags1      : UInt8;
   Flags2      : UInt8;
   BaseHigh8   : UInt8;
   BaseUpper32 : UInt32;
   Reserved    : UInt32;
end record;

for TSSEntry use record
   Limit       at 0 range 0 .. 15;
   BaseLow16   at 0 range 16 .. 31;
   BaseMid8    at 0 range 32 .. 39;
   Flags1      at 0 range 40 .. 47;
   Flags2      at 0 range 48 .. 55;
   BaseHigh8   at 0 range 56 .. 63;
   BaseUpper32 at 0 range 64 .. 95;
   Reserved    at 0 range 96 .. 127;
end record;

for TSSEntry'Size use 128;
When translating some C code to Ada, I ran into several issues, and I could not find many resources online. The C snippet is:
TSSEntry tss;

void loadTSS(size_t address) {
    tss.baseLow16 = (uint16_t)address;
    tss.baseMid8 = (uint8_t)(address >> 16);
    tss.flags1 = 0b10001001;
    tss.flags2 = 0;
    tss.baseHigh8 = (uint8_t)(address >> 24);
    tss.baseUpper32 = (uint32_t)(address >> 32);
    tss.reserved = 0;
}
This is the Ada code I tried to translate it into:

TSS : TSSEntry;

procedure loadTSS (Address : System.Address) is
begin
   TSS.BaseLow16 := Address; -- How would I downcast this to fit in the 16 lower bits?
   TSS.BaseMid8 := Shift_Right(Address, 16); -- Bitwise ops don't take System.Address + downcast
   TSS.Flags1 := 2#10001001#;
   TSS.Flags2 := 0;
   TSS.BaseHigh8 := Shift_Right(Address, 24); -- Same as above
   TSS.BaseUpper32 := Shift_Right(Address, 32); -- Same as above
   TSS.Reserved := 0;
end loadTSS;
How would I be able to solve the issues I highlighted in the code? Are there any resources a beginner can use for help in cases like this? Thanks in advance!

Use the To_Integer function in the package System.Storage_Elements to convert the address into an integer, then convert that integer to Interfaces.Unsigned_32 or Unsigned_64 (whichever is appropriate) so that you can use the shift operations to extract bit-fields.
Instead of the shift and mask operations, you can of course use division and "mod" to pick the integer apart, without converting to the Interfaces types.
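For example, a minimal sketch along those lines, assuming a 64-bit target (so that the Integer_Address value fits in Unsigned_64) and the UInt types and TSS object from the question:

with System;
with System.Storage_Elements; use System.Storage_Elements;
with Interfaces;              use Interfaces;

procedure loadTSS (Address : System.Address) is
   --  To_Integer yields an Integer_Address; widening it to Unsigned_64
   --  makes Shift_Right and "and" masking available.
   Addr : constant Unsigned_64 := Unsigned_64 (To_Integer (Address));
begin
   TSS.BaseLow16   := UInt16 (Addr and 16#FFFF#);
   TSS.BaseMid8    := UInt8 (Shift_Right (Addr, 16) and 16#FF#);
   TSS.Flags1      := 2#10001001#;
   TSS.Flags2      := 0;
   TSS.BaseHigh8   := UInt8 (Shift_Right (Addr, 24) and 16#FF#);
   TSS.BaseUpper32 := UInt32 (Shift_Right (Addr, 32) and 16#FFFF_FFFF#);
   TSS.Reserved    := 0;
end loadTSS;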

Related

How does this Union and Bit field interaction work?

So here is an example:
#include <cstdio>

struct field
{
    unsigned int a : 8;
    unsigned int b : 8;
    unsigned int c : 8;
    unsigned int d : 8;
};

union test
{
    unsigned int raw;
    field bits;
};

int main()
{
    test aUnion;
    aUnion.raw = 0xabcdef;
    printf("a: %x \n", aUnion.bits.a);
    printf("b: %x \n", aUnion.bits.b);
    printf("c: %x \n", aUnion.bits.c);
    printf("d: %x \n", aUnion.bits.d);
    return 0;
}
Now running this I get:
a: ef
b: cd
c: ab
d: 0
And I guess I just don't really get what's happening here. So I set raw to a value, and since this is a union, everything else pulls from that, since they have all been set to be smaller than an unsigned int? So the bit field is based on raw? But how does that map out? Why is d: 0 in this instance?
I would appreciate any help here.
Using the hexadecimal representation of an integer is useful because it makes the value of every byte of the integer clear. So the setting
aUnion.raw = 0xabcdef;
means that the least significant byte has the value 0xef, the second least significant byte has the value 0xcd, and so on. But you are setting the raw field of the union, which is an unsigned int, so it is 4 bytes long. In the representation above the most significant byte is missing, so the assignment can equivalently be written as
aUnion.raw = 0x00abcdef;
(it is like making explicit that an integer x = 42 has 0 hundreds, 0 thousands and so on).
Your union fields represent respectively a = byte[0], b = byte[1], c = byte[2] and d = byte[3] of the integer raw, since in a union all the elements share the same memory location. This holds because you are running your code on a little-endian architecture (least significant bytes come first).
So:
a = byte[0] of raw = 0xef
b = byte[1] of raw = 0xcd
c = byte[2] of raw = 0xab
d = byte[3] of raw = 0x00
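The same byte mapping can be verified without the union by shifting and masking, which does not depend on bit-field layout. A small sketch (not part of the original program):

#include <stdio.h>

int main(void)
{
    unsigned int raw = 0xabcdef;

    /* byte[n] is the n-th least significant byte of raw */
    for (int n = 0; n < 4; n++)
        printf("byte[%d]: %02x\n", n, (raw >> (8 * n)) & 0xffu);

    return 0;
}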
It's because the value you assigned does not set all 32 bits of the unsigned int, so it cannot fill all the bit-field values. Because 0xabcdef is only 24 bits long, the bit field d shows the hex value 00. Try, for example:
aUnion.raw = 0xffabcdef;
which will produce
a: ef
b: cd
c: ab
d: ff
Since the d bit field occupies bits 24-31 (on little endian), unless the assigned unsigned int value has some of those bits set, that bit field doesn't show a value either.

Not able to shift hex data in an unsigned long

I am trying to convert an IEEE 754 floating point representation to its decimal equivalent, so I have example data [7E FF 01 46 4B CD CC CC CC CC CC 10 40 1B 7E], which is in hex.
char strResponseData[STATUS_BUFFERSIZE]={0};
unsigned long strData = (((strResponseData[12] & 0xFF) << 512) |
                         ((strResponseData[11] & 0xFF) << 256) |
                         ((strResponseData[10] & 0xFF) << 128) |
                         ((strResponseData[9]  & 0xFF) << 64)  |
                         ((strResponseData[8]  & 0xFF) << 32)  |
                         ((strResponseData[7]  & 0xFF) << 16)  |
                         ((strResponseData[6]  & 0xFF) << 8)   |
                         (strResponseData[5] & 0xFF));
value = IEEEHexToDec(strData,1);
Then I am passing this value to this function:
double IEEEHexToDec(unsigned long number, int isDoublePrecision)
{
    int mantissaShift = isDoublePrecision ? 52 : 23;
    unsigned long exponentMask = isDoublePrecision ? 0x7FF0000000000000 : 0x7f800000;
    int bias = isDoublePrecision ? 1023 : 127;
    int signShift = isDoublePrecision ? 63 : 31;
    int sign = (number >> signShift) & 0x01;
    int exponent = ((number & exponentMask) >> mantissaShift) - bias;
    int power = -1;
    double total = 0.0;
    for ( int i = 0; i < mantissaShift; i++ )
    {
        int calc = (number >> (mantissaShift-i-1)) & 0x01;
        total += calc * pow(2.0, power);
        power--;
    }
    double value = (sign ? -1 : 1) * pow(2.0, exponent) * (total + 1.0);
    return value;
}
But in return I am getting the value 0; also, when I try to print strData it gives me only CCCCCD.
I am using the Eclipse IDE.
Please, I need some suggestions.
((strResponseData[12] & 0xFF)<< 512 )
First, the << operator takes a number of bits to shift; you seem to be confusing it with multiplication by the resulting power of two. While that has the same effect, you need to supply the exponent. Given that you have no typical data types of 512-bit width, it's fairly certain that this should actually be:
((strResponseData[12] & 0xFF)<< 9 )
Next, it's necessary for the value to be shifted to be of a sufficient type to hold the result before you do the shift. A char is obviously not sufficient, so you need to explicitly cast the value to a sufficient type to hold the result before you perform the shift.
Additionally, keep in mind that depending on your platform an unsigned long may be either a 32-bit or a 64-bit type. So if you are doing an operation with a bit shift whose result would not fit in 32 bits, you may want to use an unsigned long long, or better yet make things unambiguous, for example with #include <stdint.h> and types such as uint32_t or uint64_t. Given that your question is tagged "embedded", this is especially important to keep in mind, as you might be targeting a 32 (or even 8) bit processor while building the algorithm to test on the development machine.
Further, a char can be either a signed or an unsigned type. Before shifting, you should make that explicit. Given that you are combining multiple pieces of something, it is almost certain that at least most of these should be treated as unsigned.
So probably you want something like
((uint32_t)(strResponseData[12] & 0xFF)<< 9 )
Unless you are on an odd platform where char is not 8 bits (for example some TI DSPs), you probably don't need to pre-mask with 0xff, but it's not hurting anything.
Finally, it is not 100% clear what you are starting with:
i have an example data [7E FF 01 46 4B CD CC CC CC CC CC 10 40 1B 7E] which is in hex.
is ambiguous, as it is not clear whether you mean
[0x7e, 0xff, 0x01, 0x46...]
which would be an array of byte values that debugging code has printed out in hex for human convenience, or whether you actually have something such as
"[7E FF 01 46 .... ]"
which is a string of text containing a human-readable representation of hex digits as printable characters. In the latter case, you'd first have to convert the character representation of the hex digits or octets into numeric values.
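Putting those points together, assembling the eight bytes into a 64-bit value could look like the following sketch (assuming the buffer holds raw byte values and the field occupies indices 5 through 12, as in the question; the function name is made up):

#include <stdint.h>

uint64_t assemble(const unsigned char *buf)
{
    uint64_t value = 0;

    /* combine buf[5]..buf[12], least significant byte first,
       casting each byte up to 64 bits before shifting */
    for (int i = 0; i < 8; i++)
        value |= (uint64_t)buf[5 + i] << (8 * i);

    return value;
}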

How to randomly flip a binary bit of a char in C/C++

I have a char array A that I use to store hex:
A = "0A F5 6D 02" size=11
The binary representation of this char array is:
00001010 11110101 01101101 00000010
I want to ask: is there any function that can randomly flip bits?
That is:
if the parameter is 5
00001010 11110101 01101101 00000010
-->
10001110 11110001 01101001 00100010
it will randomly choose 5 bits to flip.
I am trying to make this hex data into binary data and use a bitmask method to achieve my requirement, then turn it back to hex. I am curious whether there is any method to do this job more quickly.
Sorry, my question description was not clear enough. Simply put, I have some hex data and I want to simulate bit errors in it. For example, if I have 5 bytes of hex data:
"FF00FF00FF"
binary representation is
"1111111100000000111111110000000011111111"
If the bit error rate is 10%, then I want these 40 bits to contain 4 bit errors. One extreme random result: the errors happen in the first 4 bits:
"0000111100000000111111110000000011111111"
First of all, find out which char the bit represents:
param is your bit to flip...
char *byteToWrite = &A[sizeof(A) - (param / 8) - 1];
So that will give you a pointer to the char at that array offset (-1 for 0 array offset vs size)
Then use the modulus (or more bit shifting, if you're feeling adventurous) to find out which bit in that byte to flip:
*byteToWrite ^= (1u << param % 8);
So, for a param of 5, the byte at A[10] should have its 5th bit toggled.
- store the values of 2^n in an array
- seed the random number generator
- loop through x times (in this case 5) and do data ^= stored_values[random_num]
Alternatively to storing the 2^n values in an array, you could shift to a random power of 2, like:
data ^= (1 << random % 8)
Reflecting the first comment, you really could just write out that line 5 times in your function and avoid the overhead of a for loop entirely.
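Here is a sketch of that approach (the helper name flip_random_bits is made up; note that, as with the steps above, the same bit may be drawn twice, so fewer than count distinct bits may end up flipped):

#include <stdlib.h>
#include <time.h>

/* flip `count` randomly chosen bits of one byte */
unsigned char flip_random_bits(unsigned char data, int count)
{
    for (int i = 0; i < count; i++)
        data ^= (unsigned char)(1u << (rand() % 8));
    return data;
}

/* usage: call srand(time(NULL)); once, then flip_random_bits(byte, 5); */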
You have a 32-bit number. You can treat the bits as parts of the number and just XOR this number with some random number that has 5 bits set.
#include <cstdio>
#include <cstdlib>

int count_1s(unsigned foo)
{
    unsigned m = 0x55555555;
    unsigned r = (foo & m) + ((foo >> 1) & m);
    m = 0x33333333;
    r = (r & m) + ((r >> 2) & m);
    m = 0x0F0F0F0F;
    r = (r & m) + ((r >> 4) & m);
    m = 0x00FF00FF;
    r = (r & m) + ((r >> 8) & m);
    m = 0x0000FFFF;
    return (r & m) + ((r >> 16) & m);
}

int main()
{
    const char *input = "0A F5 6D 02";
    unsigned char data[4] = {};
    sscanf(input, "%2hhx %2hhx %2hhx %2hhx", &data[0], &data[1], &data[2], &data[3]);
    int *x = reinterpret_cast<int*>(data);
    int y = rand();
    while (count_1s(y) != 5)
    {
        y = rand(); // draw again until exactly five bits are set
    }
    *x ^= y;
    printf("%02x %02x %02x %02x\n", data[0], data[1], data[2], data[3]);
    return 0;
}
I see no reason to convert the entire string back and forth between hex notation and binary. Just pick a random character out of the hex string, convert it to a digit, change a bit in it, and convert it back to a hex character.
In plain C:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main (void)
{
    const char *hexToDec_lookup = "0123456789ABCDEF";
    char hexstr[] = "0A F5 6D 02";
    /* 0. make sure we're fairly random */
    srand(time(0));
    /* 1. loop 5 times .. */
    int i;
    for (i=0; i<5; i++)
    {
        /* 2. pick a random hex digit
              we know it's one out of 8, grouped per 2 */
        int hexdigit = rand() & 7;
        hexdigit += (hexdigit>>1);
        /* 3. convert the digit to binary */
        int hexvalue = hexstr[hexdigit] > '9' ? hexstr[hexdigit] - 'A'+10 : hexstr[hexdigit]-'0';
        /* 4. flip a random bit */
        hexvalue ^= 1 << (rand() & 3);
        /* 5. write it back into position */
        hexstr[hexdigit] = hexToDec_lookup[hexvalue];
        printf ("[%s]\n", hexstr);
    }
    return 0;
}
It might even be possible to omit the convert-to-and-from-ASCII steps -- flip a bit in the character string, check if it's still a valid hex digit and if necessary, adjust.
First randomly choose x positions (each position consists of an array index and a bit position within that byte).
Now if you want to flip the ith bit from the right of a number n, find the quotient and remainder of n by 2^i:
code:
int divisor = 1 << i;          /* 2^i */
int remainder = n % divisor;   /* the bits below the ith bit */
int quotient = n / divisor;    /* the ith bit and everything above it */
quotient ^= 1;                 /* flip the lowest bit of the quotient, i.e. the ith bit of n */
n = divisor * quotient + remainder;
1. Take the input mod 8 (e.g. 5 % 8).
2. Shift 0x80 right by that value (e.g. by 5).
3. XOR this value with the (input/8)th element of your character array.
code:
void flip_bit(int bit)
{
    Array[bit/8] ^= (0x80 >> (bit % 8));
}

Function for decoding unsigned short value

I have a small problem with a task.
We are conducting a survey. The result of a single survey (obtained from one respondent) provides the following information, to be encoded in a variable of type unsigned short (it can be assumed that it is 2 bytes, i.e. 16 bits):
sex - 1 bit - 2 possibilities
marital status - 2 bits - 4 possibilities
Age - 2 bits - 4 possibilities
Education - 2 bits - 4 possibilities
City - 2 bits - 4 possibilities
region - 4 bits - 16 possibilities
answer - 3 bits - 8 possibilities
unsigned short coding(int sex, int marital_status, int age, int edu, int city, int region, int reply){
    unsigned short result = 0;
    result = result + sex;                      /* bit 0 */
    result = result + ( marital_status << 1 );  /* bits 1-2 */
    result = result + ( age << 3 );             /* bits 3-4 */
    result = result + ( edu << 5 );             /* bits 5-6 */
    result = result + ( city << 7 );            /* bits 7-8 */
    result = result + ( region << 9 );          /* bits 9-12 */
    result = result + ( reply << 13 );          /* bits 13-15 */
    return result;
}
This encodes the results (I hope it's correct), but I have no idea how to prepare a function that will display the information I have encoded inside the unsigned short x.
First I have to encode it:
unsigned short x = coding(0, 3, 2, 3, 0, 12, 6);
Then I need to prepare another function, which will decode the information from the unsigned short x into this form:
info(x);
RESULT
sex: 0
marital status: 3
age: 2
education: 3
city: 0
region: 12
reply: 6
I will be grateful for your help, because I have no idea how to even get started and what to look for.
My question is whether someone can check the unsigned short coding function and help me with writing void info(unsigned short x).
You can use bit fields:
struct survey_data
{
    unsigned short sex : 1;
    unsigned short marital_status : 2;
    unsigned short age : 2;
    unsigned short education : 2;
    unsigned short city : 2;
    unsigned short region : 4;
    unsigned short answer : 3;
};
If you need to convert between it and an unsigned short, you can define a union like this:
union survey
{
    struct survey_data detail;
    unsigned short s;
};
To use these types:
struct survey_data sd;
sd.sex = 0;
sd.marital_status = 2;
...
unsigned short s = 0xCAFE;
union survey x;
x.s = s;
printf("Sex: %u, Age: %u", x.detail.sex, x.detail.age);
Keep in mind that the layout of bit fields is implementation defined; different compilers may interpret them in a different order, e.g. in MSVC it is LSB to MSB. Please refer to the compiler manual and the C/C++ standards for details.
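If you want a decoder that does not depend on the compiler's bit-field layout, you can use shifts and masks instead, mirroring the coding function from the question. A sketch, assuming the bit positions used there:

#include <stdio.h>

void info(unsigned short x)
{
    printf("sex: %d\n",            x         & 0x1);
    printf("marital status: %d\n", (x >> 1)  & 0x3);
    printf("age: %d\n",            (x >> 3)  & 0x3);
    printf("education: %d\n",      (x >> 5)  & 0x3);
    printf("city: %d\n",           (x >> 7)  & 0x3);
    printf("region: %d\n",         (x >> 9)  & 0xF);
    printf("reply: %d\n",          (x >> 13) & 0x7);
}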
The solution is straightforward, and it's mostly text work. Transfer your data description
sex - 1 bit - 2 possibilities
marital status - 2 bits - 4 possibilities
Age - 2 bits - 4 possibilities
Education - 2 bits - 4 possibilities
City - 2 bits - 4 possibilities
region - 4 bits - 16 possibilities
answer - 3 bits - 8 possibilities
into this C/C++ structure:
struct Data {
    unsigned sex: 1;        // 2 possibilities
    unsigned marital: 2;    // 4 possibilities
    unsigned Age: 2;        // 4 possibilities
    unsigned Education: 2;  // 4 possibilities
    unsigned City: 2;       // 4 possibilities
    unsigned region: 4;     // 16 possibilities
    unsigned answer: 3;     // 8 possibilities
};
It's a standard use case for bit fields, which exist even in traditional C and are available in every standard-conforming C++ implementation.
Let's name your 16-bit encoded storage type store_t (of the several definitions in use, we take the one from the C standard header stdint.h):
#include <stdint.h>
typedef uint16_t store_t;
The example Data structure can be used for encoding:
/// create a compact storage type value from data
store_t encodeData(const Data& data) {
    return *reinterpret_cast<const store_t*>(&data);
}
or decoding your data set:
/// retrieve data from a compact storage type
const Data decodeData(const store_t code) {
    return *reinterpret_cast<const Data*>(&code);
}
You access the bit-field structure Data like an ordinary structure:
Data data;
data.sex = 1;
data.marital = 0;

C/C++: how to convert short to char

I am using MS C++, with a struct like:
struct header {
    unsigned port : 16;
    unsigned destport : 16;
    unsigned not_used : 7;
    unsigned packet_length : 9;
};
struct header HR;
I need to put this header value into a separate char array. I did
memcpy(&REQUEST[0], &HR, sizeof(HR));
but the value of packet_length is not appearing properly. For example, if I assign HR.packet_length = 31;
I get -128 (at the fifth byte) and 15 (at the sixth byte).
I would appreciate help with this, or a more elegant way to do it.
Thanks
This sounds like the expected behaviour with your struct, as you defined packet_length to be 9 bits long. So the lowest bit of its value already falls within the fifth byte of the memory. Thus the value -128 you see there (the highest bit being 1 in a signed char is interpreted as a negative value), and the value 15 is what is left in the 6th byte.
The memory bits look like this (in reverse order, i.e. higher to lower bits):
      byte 6      |      byte 5      | ...
 0 0 0 0 1 1 1 1  |  1 0 0 0 0 0 0 0
 '--- packet_length (9 bits) ---'  '-- not_used (7 bits) --' ...
Note also that this approach may not be portable, as the byte order inside multibyte variables is platform dependent (see endianness).
Update: I am not an expert in cross-platform development, neither did you tell much details about the layout of your request etc. Anyway, in this situation I would try to set the fields of the request individually instead of memcopying the struct into it. That way I could at least control the exact values of each individual field.
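For instance, a manual serialization could look like the following sketch; the helper name, the field order, and the big-endian byte order are assumptions, to be adjusted to whatever the protocol actually expects:

#include <stdint.h>

/* pack the 16+16+7+9 = 48 bits of the header into 6 bytes,
   fields most significant first */
void pack_header(uint8_t out[6], uint16_t port, uint16_t destport,
                 uint16_t packet_length /* only 9 bits used */)
{
    out[0] = port >> 8;
    out[1] = port & 0xFF;
    out[2] = destport >> 8;
    out[3] = destport & 0xFF;
    out[4] = (packet_length >> 8) & 0x01; /* 7 unused bits, then the top bit */
    out[5] = packet_length & 0xFF;
}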
#include <stdio.h>

struct header {
    unsigned port : 16;
    unsigned destport : 16;
    unsigned not_used : 7;
    unsigned packet_length : 9;
};

int main(){
    struct header HR = {.packet_length = 31};
    printf("%u\n", HR.packet_length);
}
$ gcc new.c && ./a.out
31
Update:
I know that I can print that value directly by using the attribute in the struct. But I need to send this struct over the network, and there I am using Java.
In that case, use an array of chars (16+16+7+9 bits, i.e. 6 bytes) and parse it on the other side using Java.
The size of the array will be less than that of the struct, and more packing is possible within a single MTU.