Storing data on device - c++

I don't have much experience with storing and manipulating binary data, so I would greatly appreciate it if someone could clarify some things for me.
Say I have a device where you have to store 16 bytes, i.e., you send it an array of bytes, probably preceded by header information, something like this:
unsigned char sendBuffer[255];
sendBuffer[0] = headerInfo1;
sendBuffer[1] = headerInfo1;
sendBuffer[2] = headerInfo1;
sendBuffer[3] = headerInfo1;
sendBuffer[4] = data;
sendBuffer[5] = data;
sendBuffer[6] = data;
sendBuffer[7] = data;
sendBuffer[8] = data;
...
sendBuffer[20] = data;
Let's say the send operation is easy: you just use Send(sendBuffer, length).
My question is: say I want to store an integer on the device - what is the best way to do this?
I have sample code which does it, but I was not sure whether it was OK or how it was doing it; it confused me too. I basically enter the number I want to store in a text box. Say I want to store 105 decimal: I enter "00000000000000000000000000000105" (I am not sure yet whether the program interprets this as decimal or as hex), and then there is this code:
for(int i = 0, m = 0; i < size; i += 2, m++)
{
    char ch1, ch2;
    ch1 = (char)str[i];    // str contains the number I entered above as a string, padded
    ch2 = (char)str[i+1];
    int dig1 = 0, dig2 = 0;  // initialized so non-hex input doesn't leave them indeterminate
    if(isdigit(ch1)) dig1 = ch1 - '0';
    else if(ch1 >= 'A' && ch1 <= 'F') dig1 = ch1 - 'A' + 10;
    else if(ch1 >= 'a' && ch1 <= 'f') dig1 = ch1 - 'a' + 10;
    if(isdigit(ch2)) dig2 = ch2 - '0';
    else if(ch2 >= 'A' && ch2 <= 'F') dig2 = ch2 - 'A' + 10;
    else if(ch2 >= 'a' && ch2 <= 'f') dig2 = ch2 - 'a' + 10;
    // array1 contains the data to write as a byte array; this is the 'data' part of the snippet above
    array1[m] = (char)(dig1*16 + dig2);
}
And this array1 is written to the device using Send as above. But when I debug, array1 contains: 0000000000000015.
When I read the value back, it is correct: 00000000000000000000000000000105. How come this works?

You're reinventing a few wheels here, but that's to be expected if you're new to C++.
std::cin >> yourInteger will read an integer, no need to convert that yourself.
Leading zeroes are usually not written out, but in a C++ integer type they're always present. E.g. int32_t always has 32 bits. If it stores 105 (0x69), it really stores 0x00000069.
So, the best way is probably to memcpy that integer to your sendBuffer. You should copy sizeof(yourInteger) bytes.
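A minimal sketch of that approach (the buffer layout and variable names are just assumptions for illustration; note that memcpy copies the integer in host byte order, so if the device expects a particular endianness you must convert first):
#include <cstdint>
#include <cstring>   // std::memcpy

unsigned char sendBuffer[255];
std::int32_t yourInteger = 105;
// hypothetical layout: 4 header bytes at [0..3], the integer right after them
std::memcpy(&sendBuffer[4], &yourInteger, sizeof(yourInteger));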

Seems there are a few questions hiding in here, so some extra answers:
You say that array1 contains: 0000000000000015, not 105.
Well, it's an array, and each member is shown as an 8-bit integer in its own right.
E.g. the last value is 5 or 05, that's the same after all. Similarly, the penultimate integer is 1 or 01.
You also wrote "Say I want to store 105 in decimal. I enter 00000000000000000000000000000105". That doesn't actually store 105 decimal: it stores 105 hexadecimal, which is 261 decimal. It is the string-to-integer conversion which determines the final value. If you used base 18 (octodecimal), the string "105" would become the integer 1*18*18 + 0 + 5 = 329 (decimal), which would be stored as 0000000101001001 in (16-bit) binary.
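To make this concrete, a small sketch using std::stoul, which takes the base as its third argument (the string and bases are just the examples from above):
#include <iostream>
#include <string>

int main() {
    std::string s = "105";
    std::cout << std::stoul(s, nullptr, 10) << '\n';  // 105 (interpreted as decimal)
    std::cout << std::stoul(s, nullptr, 16) << '\n';  // 261 (interpreted as hexadecimal)
    std::cout << std::stoul(s, nullptr, 18) << '\n';  // 329 (interpreted as base 18)
}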


What do 48 and 55 mean in hex? I found code that converts from decimal to hex, posted below.

while (n != 0) {
    temp = n % 16;   // remainder: the next hex digit, 0..15
    // check if temp < 10
    if (temp < 10) {
        hexaDeciNum[i] = temp + 48;
        i++;
    }
    else {
        hexaDeciNum[i] = temp + 55;
        i++;
    }
    n = n / 16;
}
I found this code to convert from decimal to hex, but as you can see it uses + 48 and + 55. Does anyone know why these numbers are used? BTW, temp stores the remainder... thanks!
What the code is doing, badly, is converting a value in the range of 0 to 15 into a corresponding character for the hexadecimal representation of that value. The right way to do that is with a lookup table:
const char hex[] = "0123456789ABCDEF";
hexaDeciNum[i] = hex[temp];
One problem with the code as written is that it assumes, without saying so, that you want the output characters encoded in ASCII. That's almost always the case, but there is no need to make that assumption. The compiler knows what encoding the system that it is targeting uses, so the values in the array hex in my code will be correct for the target system, even if it doesn't use ASCII.
Another problem with the code as written is the magic numbers. They don't tell you what their purpose is. To get rid of the magic numbers, replace 48 with '0' and replace 55 with 'A' - 10. But see the previous paragraph.
In C and C++ you can convert a base-10 digit to its corresponding character by adding it to '0', so
hexaDeciNum[i] = digit + '0';
will work correctly. There is no such requirement for any other values, so the conversion to a letter is not guaranteed to work, even if you write it as 'A' - 10 instead of that hardcoded 55.
And don't get me started on pointless comments:
// check if temp < 10
if (temp < 10)
If you look at the ASCII table you will see that the characters for the digits 0..9 start at code 48. So, if you take a number, e.g. 0, and add 48 to it, you get the character '0' for that number.
The same goes for letters: if you take the number 10 and add 55 to it, you get the character 'A' (code 65) from the ASCII table.
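Putting the lookup-table suggestion together, a minimal sketch of the whole conversion (the function name and the digit-reversal step are my own additions, not part of the original code):
#include <algorithm>  // std::reverse
#include <string>

std::string to_hex(unsigned int n) {
    const char hex[] = "0123456789ABCDEF";
    if (n == 0) return "0";
    std::string out;
    while (n != 0) {
        out += hex[n % 16];  // least significant digit first
        n /= 16;
    }
    std::reverse(out.begin(), out.end());  // put the most significant digit first
    return out;
}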

Appending Digits to end of Int After Converting from Base 95

I am trying to convert a string from base 95 to an int. For each character in the string I subtract 32 to convert it from base 95 to an int (more details and the conversion table I am using for reference can be found here). After converting a character, I want to append it to the end of an int variable. Here is my code thus far:
string name = "!#N$";
int name2;
for (int i = 0; i < name.length(); i++)
{
    name2 = name.at(i) - 32;
}
I want to append the value of name2 to the end of the previous value of name2, so that by the end of the loop I have one int. However, I am uncertain how to accomplish this. I considered using the += operator, but that would just add the values up. Likewise, I understand that if you multiply by 10 and then add a digit, the digit is appended (like this), but I am uncertain how to implement that in my code. Any ideas on how I can accomplish this?
name2 = 0;
for (int i = 0; i < 4; ++i) {
    int digit = (unsigned char)(name.at(i) - 32); // avoid sign extension
    name2 |= digit << (3 - i) * 8;
}
On a side note, what's with all this base 95 business? I don't really see how it could be useful. It's not like it's saving any storage space, and it's basically impossible to read. I don't know about that conversion function either; I don't see how a byte-wise subtraction of 32 is going to work.
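For comparison, if the goal is a true positional base-95 value rather than packed bytes, the usual pattern is to multiply the accumulator by the base before adding each digit; a sketch under that assumption (the function name is mine):
#include <string>

long long fromBase95(const std::string& name) {
    long long value = 0;
    for (char c : name)
        value = value * 95 + (c - 32);  // shift previous digits up one base-95 place
    return value;
}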

char* and char[] not getting same output

I declared the function argument as char*, and in my case I get an XOR equal to 210. On the other side, in the other VS project, I declared the first argument not as char* but simply as char[], and the XOR is 114, which is correct.
What's wrong? Why can't I get the same thing here?
Thanks for your replies.
UPDATE: You are right. sprintf() works fine. The problem is in the rest of the code.
bool BuildAnglePacket(char* WR_PacketAZAngle, float AZAngle)
{
    WR_PacketAZAngle[0] = 0x04;
    WR_PacketAZAngle[1] = 0x30;
    WR_PacketAZAngle[2] = 0x31;
    WR_PacketAZAngle[3] = 0x02;
    WR_PacketAZAngle[4] = 0x79;
    WR_PacketAZAngle[5] = 0x4E;
    WR_PacketAZAngle[6] = 0x48;
    int XOR;
    char HAnlge[9];
    int iAzimuthAngle;
    // AZAngle = 22;
    if (AZAngle >= -22.5 && AZAngle <= 22.5)
    {
        iAzimuthAngle = AZAngle * 10;
        if (AZAngle < 0)
        {
            iAzimuthAngle = abs(iAzimuthAngle);
            iAzimuthAngle = ((~iAzimuthAngle) & 0xFFFF) + 1;
        }
        iAzimuthAngle = 65536 + iAzimuthAngle;
        sprintf(HAnlge, "%08X", iAzimuthAngle);
        WR_PacketAZAngle[7] = HAnlge[0];
        WR_PacketAZAngle[8] = HAnlge[1];
        WR_PacketAZAngle[9] = HAnlge[2];
        WR_PacketAZAngle[10] = HAnlge[3];
        WR_PacketAZAngle[11] = HAnlge[4];
        WR_PacketAZAngle[12] = HAnlge[5];
        WR_PacketAZAngle[13] = HAnlge[6];
        WR_PacketAZAngle[14] = HAnlge[7];
        WR_PacketAZAngle[15] = 0x03;
        for (int i = 4; i < 16; i++)
            XOR ^= WR_PacketAZAngle[i];
        WR_PacketAZAngle[16] = XOR;
        WR_PacketAZAngle[17] = '\x0';
    }
    return true;
}
Solved: Yes, I forgot to initialize XOR.
Your problem is not with sprintf_s or sprintf. The value 65536 + 150 => 65686 => 0x10096.
This is the correct result as printed by your code; anything else would be a bug. BTW, I think you meant 150, not 15, as 0x96 => 150.
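A tiny standalone check of that claim (my own test harness, not from the question):
#include <cstdio>

int main() {
    char buf[9];                             // 8 hex digits plus the terminating '\0'
    std::sprintf(buf, "%08X", 65536 + 150);
    std::printf("%s\n", buf);                // prints 00010096
}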
Can it be that iAngle in your Windows version is an unsigned short, so it wraps around and you actually get 150 instead of 65536 + 150? That would explain the output '00000096', but it would mean the bug is in the original calculation code, not in the printing itself.
BTW, I assume the real code doesn't have char HAngle; but something like char HAngle[..]; otherwise anything can happen if the compiler for some reason falls asleep and doesn't produce an error.
EDIT:
The updated code shows that XOR is not initialized, so it can contain anything before it is used in the calculation, and you can get any result. You have to set it to 0 first. On the Windows side it probably worked because you were testing a debug version that zeroes integral variables, or by pure chance.
The original question doesn't make much sense: the code won't compile unless you change HAngle to an array (or a pointer to allocated memory); hex 96 is decimal 150, not 15; and you get the extra 1 because you're adding 65536 (hex 10000) to the value.
In the updated question, you get an indeterminate value for XOR (and technically undefined behaviour) since you never initialise it. You want:
int XOR = 0;
^^^
int iAngle = 15; // for example
char HAngle;
iAngle = 65536 + iAngle;
So iAngle == 65536 + 15 == 65551 which in hex is 0x0001000F. If you printed that to a string, shouldn't you get the following?
[0] 48 '0'
[1] 48 '0'
[2] 48 '0'
[3] 49 '1'
[4] 48 '0'
[5] 48 '0'
[6] 48 '0'
[7] 70 'F'
[8] 0 '\0'
Surely index 3 must always be a hex 1 in this case?
It looks like the Windows function is doing something strange...
Also, HAngle should be an array if you're going to print to it; otherwise you're way overflowing the storage allocated to it. You seem to be able to treat it as a pointer when you call sprintf, so I'm assuming you mean char *HAngle and have allocated memory for the buffer before you print to it?
EDIT: From your updated code it looks like XOR is not initialised? If it isn't, then it could start with any random value (the compiler doesn't have to set it to zero, I'm afraid:) ). This could account for the different results: on both sides it could have an arbitrary initial value, and it just so happens that on one side the initial value is zero...

c++ convert character to integer using ASCII

I am working on a little C++ project that receives a char array input from the user. Depending on the value, I am converting it to an int. I understand there are better ways of doing this, but I thought I'd try to convert it through ASCII to allow other uses later on. My current code for the conversion is:
int ctoi(char *item){
    int ascii, num = 0;
    ascii = static_cast<int>(item[0]);
    if(ascii >= 49 && ascii <= 57){
        num = ascii - 48;
    }else{
        return 0;
    }
    ascii = static_cast<int>(item[1]);
    if(ascii >= 48 && ascii <= 57){
        num = num * 10;
        num = num + (ascii - 48);
    }else{
        return 0;
    }
    return num;
}
It receives input into the char array item[2] in the main function and passes it to the conversion function above. The function converts the first char to its ASCII value and, if that is between 1 and 9, stores its decimal value in num; then it converts the second char, and if that is between 0 and 9, it multiplies num by 10 (moving it along one place) and adds the second digit's value. If it fails at any point, it returns 0 instead.
When I cout the function's result after receiving a value and run this code in a console, it works fine for single-digit numbers (1-9). However, when I try a two-digit number, it repeats digits: for 23 it outputs 2233.
Thanks for any help.
I wonder how you're reading the input into a two-character array. Note that it's customary to terminate such strings with a null character, which leaves just one for the actual input. In order to read a string in C++, use this code:
std::string s;
std::cin >> s;
Alternatively, for a whole line, use this:
std::string line;
getline(std::cin, line);
In any case, these are basics explained in any C++ text. Go and read one, it's inevitable!
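For completeness, a sketch of the conversion once the input is in a std::string (the loop generalizes the two-digit version above; in real code std::stoi does this for you):
#include <cctype>
#include <string>

int ctoi(const std::string& item) {
    int num = 0;
    for (char c : item) {
        if (!std::isdigit(static_cast<unsigned char>(c)))
            return 0;                // bail out on non-digit, as the original did
        num = num * 10 + (c - '0');  // shift previous digits left, append new one
    }
    return num;
}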

convert char[] of hexadecimal numbers to char[] of letters corresponding to the hexadecimal numbers in ascii table and reversing it

I have a char a[] of hexadecimal characters like this:
"315c4eeaa8b5f8aaf9174145bf43e1784b8fa00dc71d885a804e5ee9fa40b16349c146fb778cdf2d3aff021dfff5b403b510d0d0455468aeb98622b137dae857553ccd8883a7bc37520e06e515d22c954eba5025b8cc57ee59418ce7dc6bc41556bdb36bbca3e8774301fbcaa3b83b220809560987815f65286764703de0f3d524400a19b159610b11ef3e"
I want to convert it to letters corresponding to each hexadecimal number like this:
68656c6c6f = hello
and store it in char b[], and then do the reverse.
I don't want a block of code, please; I want an explanation, including which libraries are used and how to use them.
Thanks
Assuming you are talking about ASCII codes: well, the first step is to find the size of b. Assuming every character is represented by 2 hexadecimal digits (for example, a tab would be 09), the size of b is simply strlen(a) / 2 + 1.
That done, you need to go through letters of a, 2 by 2, convert them to their integer value and store it as a string. Written as a formula you have:
b[i] = (to_digit(a[2*i]) << 4) + to_digit(a[2*i+1])
where to_digit(x) converts '0'-'9' to 0-9 and 'a'-'f' or 'A'-'F' to 10-15.
Note that if characters below 0x10 are written with only one digit (the only one I can think of is tab), then instead of using 2*i as the index into a, you should keep a next_index in your loop, which is incremented by 2 if a[next_index] < '8', or by 1 otherwise. In the latter case, b[i] = to_digit(a[next_index]).
The reverse of this operation is very similar. Each character b[i] is written as:
a[2*i] = to_char(b[i] >> 4)
a[2*i+1] = to_char(b[i] & 0xf)
where to_char is the opposite of to_digit.
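A sketch of those two helpers (the names to_digit and to_char come from the formulas above; the bodies are my own):
int to_digit(char x) {
    if (x >= '0' && x <= '9') return x - '0';
    if (x >= 'a' && x <= 'f') return x - 'a' + 10;
    if (x >= 'A' && x <= 'F') return x - 'A' + 10;
    return -1;  // not a hex digit
}

char to_char(int d) {
    return "0123456789abcdef"[d & 0xf];  // d must be 0..15
}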
Converting the hexadecimal string to a character string can be done by using std::string::substr to get the next two characters of the hex string, then using std::stoi to convert the substring to an integer. This can be cast to a character that is added to a std::string. The std::stoi function is C++11 only; if you don't have it you can use e.g. std::strtol.
To do the opposite, you loop over each character of the input string, cast it to an integer, and put it in a std::ostringstream preceded by manipulators to have it presented as a two-digit, zero-prefixed hexadecimal number. Append to the output string.
Use std::string::c_str to get an old-style C char pointer if needed.
No external library, only using the C++ standard library.
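A minimal sketch of that approach (function names are mine; C++11 for std::stoi and range-for):
#include <cstddef>
#include <iomanip>
#include <sstream>
#include <string>

std::string hex_to_text(const std::string& hex) {
    std::string out;
    for (std::size_t i = 0; i + 1 < hex.size(); i += 2)
        out += static_cast<char>(std::stoi(hex.substr(i, 2), nullptr, 16));
    return out;
}

std::string text_to_hex(const std::string& text) {
    std::ostringstream os;
    for (unsigned char c : text)  // unsigned so negative chars don't sign-extend
        os << std::hex << std::setw(2) << std::setfill('0') << static_cast<int>(c);
    return os.str();
}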
Forward:
Read two hex chars from input.
Convert to int (0..255). (hint: sscanf is one way)
Append int to output char array
Repeat 1-3 until out of chars.
Null terminate the array
Reverse:
Read single char from array
Convert to 2 hexadecimal chars (hint: sprintf is one way).
Concat buffer from (2) to final output string buffer.
Repeat 1-3 until out of chars.
Almost forgot to mention: only stdio.h and the regular C runtime are required, assuming you're using sscanf and sprintf. You could alternatively create a pair of conversion tables that would radically speed up the conversions.
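A sketch of those steps (function names and buffer handling are my own; the caller must supply large enough output buffers):
#include <cstdio>

void hex_to_chars(const char* hex, char* out) {   // forward: "68656c6c6f" -> "hello"
    for (; hex[0] && hex[1]; hex += 2) {
        unsigned int byte;
        std::sscanf(hex, "%2x", &byte);           // read two hex digits as one value
        *out++ = static_cast<char>(byte);
    }
    *out = '\0';                                  // null terminate the array
}

void chars_to_hex(const char* text, char* out) {  // reverse: "hello" -> "68656c6c6f"
    for (; *text; ++text)
        out += std::sprintf(out, "%02x", static_cast<unsigned char>(*text));
}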
Here's a simple piece of code to do the trick:
unsigned int hex_digit_value(char c)
{
    if ('0' <= c && c <= '9') { return c - '0'; }
    if ('a' <= c && c <= 'f') { return c + 10 - 'a'; }
    if ('A' <= c && c <= 'F') { return c + 10 - 'A'; }
    return -1;
}

std::string dehexify(std::string const & s)
{
    std::string result(s.size() / 2, '\0');   // one output byte per two hex digits
    for (std::size_t i = 0; i != s.size() / 2; ++i)
    {
        result[i] = hex_digit_value(s[2 * i]) * 16
                  + hex_digit_value(s[2 * i + 1]);
    }
    return result;
}
Usage:
char const a[] = "12AB";
std::string s = dehexify(a);
Notes:
A proper implementation would add checks that the input string length is even and that each digit is in fact a valid hex numeral.
Dehexifying has nothing to do with ASCII. It just turns any hexified sequence of nibbles into a sequence of bytes. I just use std::string as a convenient "container of bytes", which is exactly what it is.
There are dozens of answers on SO showing you how to go the other way; just search for "hexify".
Each hexadecimal digit corresponds to 4 bits, because 4 bits has 16 possible bit patterns (and there are 16 possible hex digits, each standing for a unique 4-bit pattern).
So, two hexadecimal digits correspond to 8 bits.
And on most computers nowadays (some Texas Instruments digital signal processors are an exception) a C++ char is 8 bits.
This means that each C++ char is represented by 2 hex digits.
So, simply read two hex digits at a time, convert to int using e.g. an istringstream, convert that to char, and append each char value to a std::string.
The other direction is just opposite, but with a twist.
Because char is signed on most systems, you need to convert to unsigned char before converting that value again to hex digits.
Conversion to and from hexadecimal text can be done using the hex manipulator, e.g.
cout << hex << x;
cin >> hex >> x;
for a suitable definition of x, e.g. int x.
This works for string streams as well.
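A quick illustration with string streams (the values are arbitrary examples):
#include <iostream>
#include <sstream>

int main() {
    int x = 0;
    std::istringstream in("6c");
    in >> std::hex >> x;                 // parses "6c" as hexadecimal
    std::cout << x << '\n';              // prints 108
    std::cout << std::hex << x << '\n';  // prints 6c
}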