Function behaves differently with similar input (C++)

I am trying to work with the Modbus protocol and right now I am calculating the LRC of the messages. I wrote a function that worked with no issue for every input I gave it, then I noticed that it did not work for one particular input, and I can't find a logical explanation for why it fails.
The function is:
void LRCstring(std::string example)
{
    std::stringstream ss;
    std::string hex = example.substr(1, example.length() - 5);
    std::vector<unsigned char> hexCh;
    unsigned int buffer;
    int offset = 0;
    while (offset < hex.length()) {
        ss.clear();
        ss << std::hex << hex.substr(offset, 2);
        ss >> buffer;
        hexCh.push_back(static_cast<unsigned char>(buffer));
        offset += 2;
    }
    unsigned char LRC = 0x00;
    int i;
    for (i = 0; i < hexCh.size(); i++)
    {
        LRC = LRC + hexCh[i];
    }
    LRC = 0xFF - LRC; // one's complement
    LRC = LRC + 1;    // two's complement
    //std::string s = std::to_string(LRC);
    //int deci = atoi(s.c_str());
    int deci = LRC;
    int reste = deci % 16;
    std::string temp;
    int partiehexa = (deci - reste) / 16;
    std::string temp2;
    std::cout << "deci : " << deci << std::endl;
    std::cout << "reste : " << reste << std::endl;
    std::cout << "partiehexa : " << partiehexa << std::endl;
    std::stringstream ss2;
    ss2 << reste;
    ss2 >> temp;
    ss2 << partiehexa;
    ss2 >> temp2;
    if (partiehexa < 10) { LRCascii += temp2; }
    if (partiehexa == 10) { LRCascii += 'A'; }
    if (partiehexa == 11) { LRCascii += 'B'; }
    if (partiehexa == 12) { LRCascii += 'C'; }
    if (partiehexa == 13) { LRCascii += 'D'; }
    if (partiehexa == 14) { LRCascii += 'E'; }
    if (partiehexa == 15) { LRCascii += 'F'; }
    if (reste < 10) { LRCascii += temp; }
    if (reste == 10) { LRCascii += 'A'; }
    if (reste == 11) { LRCascii += 'B'; }
    if (reste == 12) { LRCascii += 'C'; }
    if (reste == 13) { LRCascii += 'D'; }
    if (reste == 14) { LRCascii += 'E'; }
    if (reste == 15) { LRCascii += 'F'; }
    std::cout << "LRC : " << LRCascii << std::endl;
    return;
}
Examples of inputs and results when it works:
input > ":040100130013??\r\n"
The cout display "LRC : D5"
input > ":0401CD6B05??\r\n"
The cout display "LRC : BE"
D5 and BE are the right results.
I tried with other inputs and had no problem until these:
input > ":0403006B0003??\r\n"
The cout display "LRC : B"
input > ":040306022B00000064??\r\n"
The cout display "LRC : 2"
It should be 8B and not simply B, and it should be 62 and not simply 2.
The last part of the LRC is correct, but the other part is ignored. What is even stranger is that in these cases the cout of "partiehexa" shows "8" and "6", so it is not as if that int were empty. I fail to understand why this is happening.

This looks more like C than C++ to me, but I did quickly analyze your code. I think the problem lies in the branch where you handle
(partiehexa < 10)
By the time you extract temp2, ss2 has already reached end-of-stream from the first extraction into temp, so the second extraction fails and temp2 stays empty. As a result, LRCascii only gets a character appended in the "reste" part of the code; in the "partiehexa" branch nothing gets appended when partiehexa is below 10.
Suggestion: change that branch to the following (using a fresh stringstream) and your code will work:
if (partiehexa < 10)
{
    std::stringstream ss3;
    ss3 << partiehexa;
    ss3 >> temp2;
    LRCascii += temp2;
}
The best solution would be to rewrite the code in a cleaner way, for example along the lines of the sketch below.
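As an illustration only, here is a sketch of such a rewrite (the name computeLRC is just illustrative, and it assumes the same framing as your input strings: a leading ':' and a trailing LRC placeholder plus CR/LF). It lets an output stream do the hex formatting instead of handling the digits by hand:
#include <cstddef>
#include <iomanip>
#include <sstream>
#include <string>

// Sketch: computes the Modbus ASCII LRC of the payload between the leading ':'
// and the trailing "??\r\n", returned as two uppercase hex characters.
std::string computeLRC(const std::string& example)
{
    std::string hex = example.substr(1, example.length() - 5);

    unsigned char LRC = 0;
    for (std::size_t offset = 0; offset + 1 < hex.length(); offset += 2) {
        // parse one byte (two hex characters) at a time
        unsigned int byte = std::stoul(hex.substr(offset, 2), nullptr, 16);
        LRC = static_cast<unsigned char>(LRC + byte);
    }
    LRC = static_cast<unsigned char>(-LRC); // two's complement negation

    std::ostringstream out;
    out << std::uppercase << std::hex << std::setw(2) << std::setfill('0')
        << static_cast<int>(LRC);
    return out.str(); // e.g. "8B" for ":0403006B0003??\r\n"
}
The std::setw(2) / std::setfill('0') pair is what guarantees the leading digit is never dropped.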

To calculate the LRC:
Add up all the data bytes in the message (before converting to ASCII and without the initial colon and final CR/LF).
Throw away any carry beyond 8 bits.
Negate the result (two's complement) to get the LRC byte.
In your example, the checksum can be calculated by hand as below:
String: 040306022B00000064
Checksum: 62
Byte#   Hex Value   Decimal Value
1       04            4
2       03            3
3       06            6
4       02            2
5       2B           43
6       00            0
7       00            0
8       00            0
9       64          100
Total   9E          158
LRC     FFFFFF62   -158   (negated total, shown here as a 32-bit int)
So the single Hex LRC Byte in this example is 62.
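For reference, a minimal sketch of the same computation (the byte values are hard-coded from the table above):
#include <cstdint>
#include <iostream>

int main()
{
    // bytes of the string 040306022B00000064
    const std::uint8_t bytes[] = {0x04, 0x03, 0x06, 0x02, 0x2B, 0x00, 0x00, 0x00, 0x64};

    std::uint8_t sum = 0;
    for (std::uint8_t b : bytes)
        sum += b;                                        // carry past 8 bits is discarded

    std::uint8_t lrc = static_cast<std::uint8_t>(-sum);  // two's complement negation

    std::cout << std::hex << static_cast<int>(sum) << " "   // 9e
              << static_cast<int>(lrc) << std::endl;         // 62
}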
Prashant..

Related

Converting 64-digit hex numbers to int

I'm working on a college assignment involving conversion of hex numbers read through a stringstream. I have a big hex number (a private key), and I need to convert it to int to put it in a map<int,int>.
When I run the code, the conversion result is the same for both hex values inserted, which is incorrect; the results should be different. I think it is an integer-size problem, because when I insert short hex values it works fine. As shown below, each hex number has 64 digits.
Any idea how to get it working?
#include <cstdint>
#include <iostream>
#include <sstream>

int main()
{
    unsigned int x;
    std::stringstream ss;
    ss << std::hex << "0x3B29786B4F7E78255E9F965456A6D989A4EC37BC4477A934C52F39ECFD574444";
    ss >> x;
    std::cout << "Saida" << x << std::endl;
    // output it as a signed type
    std::cout << "Result 1: " << static_cast<std::int64_t>(x) << std::endl;
    ss << std::hex << "0x3C29786A4F7E78255E9A965456A6D989A4EC37BC4477A934C52F39ECFD573344";
    ss >> x;
    std::cout << "Saida 2 " << x << std::endl;
    // output it as a signed type
    std::cout << "Result 2: " << static_cast<std::int64_t>(x) << std::endl;
}
Firstly, the hex numbers in your examples do not fit into an unsigned int.
You should also clear the stream before loading the second hex number into it:
...
std::cout << "Result 1: " << static_cast<std::int64_t>(x) << std::endl;
ss.clear();
ss << std::hex << "0x3C29786A4F7E78255E9A965456A6D989A4EC37BC4477A934C52F39ECFD573344";
ss >> x;
...
Each hexadecimal digit equates to 4 bits (0xf -> 1111b). Those hex strings are both 64 x 4 = 256 bits long. You're looking at a range error.
You need to process the input 16 characters at a time. Each character is 4 bits, so the first 16 characters will give you an unsigned 64-bit value (16 x 4 is 64).
Then you can put the first value in a vector or other container and move on to the next 16 characters. If you have questions about string manipulation, search this site for similar questions.
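A rough sketch of that idea (the function name parseBigHex and the surrounding main are only illustrative):
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Splits a long hex string into 64-bit chunks of 16 hex digits each,
// most significant chunk first.
std::vector<std::uint64_t> parseBigHex(const std::string& hex)
{
    std::vector<std::uint64_t> chunks;
    for (std::size_t pos = 0; pos < hex.size(); pos += 16)
        chunks.push_back(std::stoull(hex.substr(pos, 16), nullptr, 16));
    return chunks;
}

int main()
{
    const std::string key =
        "3B29786B4F7E78255E9F965456A6D989A4EC37BC4477A934C52F39ECFD574444";
    for (std::uint64_t chunk : parseBigHex(key))
        std::cout << std::hex << chunk << std::endl;
}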

Displaying input character's binary value

I'm trying to understand why I keep getting an unexpected binary result. For example, if I enter
8
I get a result of
00111000
and not
00001000
I'm not trying to manipulate the value to get another result; I'm trying to see what the actual data is for my input and understand why it gives that output.
I'm using C++ in Visual Studio, targeting Win32.
This is my code:
#include <stdio.h>
#include <climits>
#include <bitset>
#include <iostream>
using namespace std;

int main() {
    char cl;
    cout << "The minimum value of char is " << CHAR_MIN << endl;
    cout << "The maximum value of char is " << CHAR_MAX << endl;
    cout << "The storage size in byte(s) of a char is " << sizeof(cl) << endl;
    cout << "Input hexadecimal number in the data type of char for example a" << endl;
    scanf_s("%c", &cl, sizeof(cl));
    bitset<8 * sizeof(cl)> charBits(cl);
    cout << "The converted binary value is " << charBits << endl;
    printf("The converted decimal value is %i \n", cl);
}
The value being input is an ASCII character, and you should convert that value to its corresponding number before printing. In ASCII, the letters 'a'-'f' have the range 97-102, 'A'-'F' have the range 65-70, and '0'-'9' have the range 48-57. So after getting your input, test its ASCII value with ifs and subtract accordingly, like this:
// Subtracting 87 converts 'a' to 10 and 'f' to 15, the numerical representations
// of those hexadecimal values, which are then converted to binary by bitset.
if (cl >= 97 && cl <= 102)
    cl -= 87;
// Subtracting 55 does the same for the uppercase letters: 'A' becomes 10, 'F' becomes 15.
else if (cl >= 65 && cl <= 70)
    cl -= 55;
// Subtracting 48 from '0' converts it to the number 0 in memory,
// and subtracting 48 from '9' converts it to the number 9.
else if (cl >= 48 && cl <= 57)
    cl -= 48;
You'll notice that this breaks the decimal printing, so for printing you should cast the value to an int, like this: cout << static_cast<int>(cl) << endl;.
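Putting it together, a minimal self-contained sketch (it reads the character with std::cin instead of scanf_s, which is just a simplification):
#include <bitset>
#include <iostream>

int main()
{
    char cl;
    std::cout << "Input one hexadecimal digit, for example a" << std::endl;
    std::cin >> cl;

    // Convert the ASCII character to the numeric value it represents.
    if (cl >= 'a' && cl <= 'f')
        cl -= 87;   // 'a' (97) -> 10 ... 'f' (102) -> 15
    else if (cl >= 'A' && cl <= 'F')
        cl -= 55;   // 'A' (65) -> 10 ... 'F' (70) -> 15
    else if (cl >= '0' && cl <= '9')
        cl -= 48;   // '0' (48) -> 0 ... '9' (57) -> 9

    std::bitset<8 * sizeof(cl)> charBits(cl);
    std::cout << "The converted binary value is " << charBits << std::endl;
    std::cout << "The converted decimal value is " << static_cast<int>(cl) << std::endl;
}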

stringstream >> uint8_t in hex? c++

I am confused by the output of the following code:
uint8_t x = 0, y = 0x4a;
std::stringstream ss;
std::string a = "4a";
ss << std::hex << a;
ss >> x;
std::cout << (int)x << " " << (int)y << std::endl;
std::cout << x << " " << y << std::endl;
std::cout << std::hex << (int)x << " " << (int)y << std::endl;
uint8_t z(x);
std::cout << z;
the output for the above is:
52 74
4 J
34 4a
4
and when we replace the first line with:
uint16_t x = 0, y = 0x4a;
the output turns into:
74 74
74 74
4a 4a
J
I think I understand what happens, but I don't understand why it happens or how I can prevent it / work around it. From my understanding the std::hex modifier is somehow undermined because of the type of x; that may not be exactly true at a technical level, but it seems to simply write the first character it reads.
Background: the input is supposed to be a string of hexadecimal digits, each pair representing a byte (just like a bitmap, except as a string). I want to be able to read each byte and store it in a uint8_t, so I was experimenting with that when I came across this problem. I still can't determine the best method for this, so if you think what I'm doing is inefficient or unnecessary, I would appreciate knowing why. Thank you for reading.
ss >> x
is treating the uint8_t x as an unsigned char. The ASCII value of '4' is (decimal) 52: it is reading the first char of the string "4a" into x as if x were a character. When you switch to uint16_t, x is treated as an unsigned short integer type; same with y. A common workaround is to extract into a wider unsigned type and then narrow, as in the sketch below.
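A rough sketch of that workaround (the input string here is made up): extract each pair of hex digits into an unsigned int, then narrow the result to uint8_t:
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    const std::string input = "4a3f00ff";      // pairs of hex digits, one byte each
    std::vector<std::uint8_t> bytes;

    for (std::size_t pos = 0; pos + 1 < input.size(); pos += 2) {
        unsigned int value = 0;
        std::stringstream ss;
        ss << std::hex << input.substr(pos, 2);
        ss >> value;                            // extract into a real integer type
        bytes.push_back(static_cast<std::uint8_t>(value));
    }

    for (std::uint8_t b : bytes)
        std::cout << std::hex << static_cast<int>(b) << " ";
    std::cout << std::endl;                     // prints: 4a 3f 0 ff
}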

Printing out the hex value of an unsigned char array in C++

I want to print out the hex value of an unsigned char array using the cout function.
The most obvious approach would be something like the following.
unsigned char str[] = "foo bar baz\n";
for (unsigned short int i = 0; i < sizeof(str); i++) {
    std::cout << std::hex << str[i] << std::dec << ' ';
}
std::cout << std::endl;
Surprisingly, this outputs the following string:
foo bar baz
For some reason this does not print out the proper hexadecimal values of str.
How can I cout the proper hex value of str?
To cout the proper hex value of an unsigned char, it will need to be converted to an integer first.
unsigned char str[] = "foo bar baz\n";
for (unsigned short int i = 0; i < sizeof(str); i++) {
    std::cout << std::hex << (int) str[i] << std::dec << ' ';
}
std::cout << std::endl;
Gives the following output.
66 6f 6f 20 62 61 72 20 62 61 7a a 0
Which corresponds with the hex value of each unsigned char in str.
An explanation for this can be found in the std::hex documentation:
std::hex
Sets the basefield format flag for the str stream to hex.
When basefield is set to hex, integer values inserted into the stream are expressed in hexadecimal base (i.e., radix 16). For input streams, extracted values are also expected to be expressed in hexadecimal base when this flag is set.
http://www.cplusplus.com/reference/ios/hex/

3 * 8 = 18 in c++

I'm getting a slight math error in my program which is causing problems in the rest of it, and I don't get why it is happening. In this part of the class I did not overload the operators for built-in types (I hope); if I did, please show me where.
This function is meant to calculate the least number of bits needed to store the number, which is kept in a deque<uint8_t> named value, so 0x123456 will be stored as {0x12, 0x34, 0x56}, and the output of integer.bits() should be 21.
// all types here are standard, so I don't know what's going on
unsigned int bits() {
    unsigned int out = value.size() << 3;
    std::cout << out << " " << value.size() << " " << (value.size() << 3) << std::endl;
    uint8_t top = 128;
    while (!(value.front() & top)) {
        out--;
        top >>= 1;
    }
    return out;
}
Yet this part is printing
8 1 8
16 2 16
...
and finally,
18 3 18
http://ideone.com/zLfz2
3 * 8 is 24, and in hex that's 0x18. You have std::hex scattered about your code...
Someone has changed your stream to hex, and it's printing decimal 24 as hex 18. A small demonstration is below.
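A tiny standalone demonstration of the sticky hex flag (not from the asker's program):
#include <iostream>

int main()
{
    std::cout << std::hex << 255 << std::endl;    // prints ff; the hex flag stays set
    std::cout << 3 * 8 << std::endl;              // still hex: prints 18
    std::cout << std::dec << 3 * 8 << std::endl;  // back to decimal: prints 24
}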