Sending hex value in Win32 serial not working like the equivalent C++ console code - c++

I have the following C++ code to display a hex value built from an int array.
#include <iostream>
#include <cstdio>   // sprintf
#include <cstring>  // strlen
#include <cmath>    // pow
#include <cstdlib>  // ltoa (non-standard, available in the MS CRT)
using namespace std;
int main()
{
    int binText[32] = {1,0,1,0,1,1,1,1,0,0,0,1,1,0,1,0,1,1,1,1,0,1,0,1,1,1,1,1,0,0,0,1};
    char temp[255] = {0};
    for (int i = 0; i < 32; i++)
    {
        sprintf(&temp[strlen(temp)], "%d", binText[i]); // build "1010..." as a string
    }
    char HexBuffer[255];
    unsigned long long int Number = 0;
    int BinLength = strlen(temp);
    for (int i = 0; i < 32; i++)
    {
        // accumulate the binary digits into a single integer
        Number += (long int)((temp[32 - i - 1] - 48) * pow((double)2, i));
    }
    ltoa(Number, HexBuffer, 16); // format the integer as a hex string
    cout << HexBuffer << endl;
}
Its output is: af1af5f1
So this code converts the binary digits stored in the int array into a hex value.
But when I tried to use this same code to send the hex value over serial communication using Win32, it did not send the correct hex value. The code is:
serialObj.begin("COM1", 9600); // opens the port
int binText[32] = {1,0,1,0,1,1,1,1,0,0,0,1,1,0,1,0,1,1,1,1,0,1,0,1,1,1,1,1,0,0,0,1};
char temp[255] = {0};
for (int i = 0; i < 32; i++)
{
    sprintf(&temp[strlen(temp)], "%d", binText[i]);
}
char HexBuffer[255];
unsigned long long int Number = 0;
int BinLength = strlen(temp);
for (int i = 0; i < 32; i++)
{
    Number += (long int)((temp[32 - i - 1] - 48) * pow((double)2, i));
}
ltoa(Number, HexBuffer, 16);
serialObj.send(HexBuffer);
serialObj.close(); // closes the port
The "send" function invoked by "serialObj.send(HexBuffer);" is as below:
void serial::send(char data[])
{
    DWORD dwBytesWrite;
    WriteFile(serialHandle, data, 4, &dwBytesWrite, NULL);
}
But the data it sends is "61 66 31 61". I could not figure out why it gives this output.
The "send" function "serialObj.send" works properly for following code
char str[4]={0x24,0x24,0x53,0x3F};
serialObj.send(str);
and it sends 24 24 53 3F.
So i want to send AF 1A F5 F1 from the above the binary digit stored in the int array(shown above). how can i do this

The bytes you saw, 61 66 31 61, are simply the ASCII codes of the first four characters of the string ltoa produced: 'a' = 0x61, 'f' = 0x66, '1' = 0x31, 'a' = 0x61. If you want to send the actual binary bits, don't call ltoa at all:
serialObj.send((char *)&Number);
Note that Number is declared as long long, which is likely 64 bits, so it isn't going to fit in the 4 bytes sent by your serial::send() function.
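Also be aware that (char *)&Number sends the in-memory byte order, which on a little-endian x86 machine would come out reversed. A minimal sketch (reusing the question's Number and serialObj, and assuming the receiver wants the most significant byte first) could pack the low 32 bits by hand:
unsigned char bytes[4];
bytes[0] = (unsigned char)((Number >> 24) & 0xFF); // 0xAF
bytes[1] = (unsigned char)((Number >> 16) & 0xFF); // 0x1A
bytes[2] = (unsigned char)((Number >> 8)  & 0xFF); // 0xF5
bytes[3] = (unsigned char)( Number        & 0xFF); // 0xF1
serialObj.send((char *)bytes); // send() writes exactly 4 bytes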

Related

What is the difference between AES128 and AES256 regarding key expansion and bytes generation? (and maybe AES192)

If I am not wrong, for AES128 we start from 16 bytes of key data, hence the code for it is shown below.
void KeyExpansion(unsigned char inputKey[16], unsigned char expandedKeys[176])
{
    for (int i = 0; i < 16; i++)
        expandedKeys[i] = inputKey[i];
    int bytesGenerated = 16;
    int rconIteration = 1;
    unsigned char temp[4];
    while (bytesGenerated < 176)
    {
        for (int i = 0; i < 4; i++)
            temp[i] = expandedKeys[i + bytesGenerated - 4];
        if (bytesGenerated % 16 == 0)
        {
            keyExpansionCore(temp, rconIteration);
            rconIteration++;
        }
        for (unsigned char a = 0; a < 4; a++)
        {
            expandedKeys[bytesGenerated] = expandedKeys[bytesGenerated - 16] ^ temp[a];
            bytesGenerated++;
        }
    }
}
However, I am not too sure about AES256. Do we also start from 16 bytes of key data, or 32 bytes? If it takes 32 bytes, do I have to change my inputKey[16] to inputKey[32]? And what about the expandedKeys? I was confused when I saw expandedKeys[176] next to inputKey[16]; doesn't that make 192 bytes? (But I saw this code while searching for AES128.)
It's the key size that is different, not the block size (which is always 16 bytes). The key size in AES can be 128, 192, or 256 bits. The expanded key holds one 16-byte round key per round plus the initial one: 176 bytes (11 round keys) for AES-128, 208 bytes (13) for AES-192, and 240 bytes (15) for AES-256. The input key is copied into the front of the buffer, so 176 already includes your 16 input bytes; nothing becomes 192. For AES-256 you would therefore use inputKey[32] and expandedKeys[240].
More info is in the linked StackExchange answers.
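As an illustration only (not from the original answer), here is how the question's loop might change for AES-256. It assumes the same keyExpansionCore from the question plus an sbox[256] lookup table, and adds the extra SubWord-only step that AES-256 applies halfway through each 32-byte stretch:
void KeyExpansion256(unsigned char inputKey[32], unsigned char expandedKeys[240])
{
    for (int i = 0; i < 32; i++)          // copy the full 32-byte key first
        expandedKeys[i] = inputKey[i];
    int bytesGenerated = 32;
    int rconIteration = 1;
    unsigned char temp[4];
    while (bytesGenerated < 240)          // 15 round keys * 16 bytes
    {
        for (int i = 0; i < 4; i++)
            temp[i] = expandedKeys[i + bytesGenerated - 4];
        if (bytesGenerated % 32 == 0)
        {
            keyExpansionCore(temp, rconIteration); // RotWord + SubWord + Rcon
            rconIteration++;
        }
        else if (bytesGenerated % 32 == 16)
        {
            for (int i = 0; i < 4; i++)   // AES-256 only: SubWord without rotation
                temp[i] = sbox[temp[i]];
        }
        for (unsigned char a = 0; a < 4; a++)
        {
            expandedKeys[bytesGenerated] = expandedKeys[bytesGenerated - 32] ^ temp[a];
            bytesGenerated++;
        }
    }
}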

Bitwise operator to calculate checksum

I am trying to come up with a C/C++ function to calculate the checksum of a given array of hex values.
char *hex = "3133455D332015550F23315D";
For example, the above buffer has 12 bytes, and the last byte is the checksum.
What needs to be done is: convert the first 11 individual bytes to decimal and then take their sum,
i.e., 31 = 49,
33 = 51, .....
so 49 + 51 + .....................
Then convert this decimal value to hex, take the LSB of that hex value, and convert that to binary.
Now take the 2's complement of this binary value and convert that to hex. At this step, the hex value should be equal to the 12th byte.
But the above buffer is just an example, so it may not be correct.
So there are multiple steps involved in this.
I am looking for an easy way to do this using bitwise operators.
I did something like this, but it seems to take the first 2 bytes and doesn't give me the right answer.
int checksum(char *buffer, int size)
{
    int value = 0;
    unsigned short tempChecksum = 0;
    int checkSum = 0;
    for (int index = 0; index < size - 1; index++) {
        value = (buffer[index] << 8) | (buffer[index]); // bug: ORs the same byte into both halves
        tempChecksum += (unsigned short) (value & 0xFFFF);
    }
    checkSum = (~(tempChecksum & 0xFFFF) + 1) & 0xFFFF;
    return checkSum;
}
I couldn't get this logic to work. I don't have enough embedded programming behind me to understand the bitwise operators. Any help is welcome.
ANSWER
I got this working with the changes below.
for (int index = 0; index < size - 1; index++) {
    value = buffer[index];
    tempChecksum += (unsigned short) (value & 0xFFFF);
}
checkSum = (~(tempChecksum & 0xFF) + 1) & 0xFF;
Using addition to obtain a checksum is at least weird. Common checksums use bitwise XOR or a full CRC. But assuming it is really what you need, it can be done easily with unsigned char operations:
#include <stdio.h>

char checksum(const char *hex, int n) {
    unsigned char ck = 0;
    for (int i = 0; i < n; i += 1) {
        unsigned val;
        int cr = sscanf(hex + 2 * i, "%2x", &val); // convert 2 hex chars to a byte value
        if (cr == 1) ck += val;
    }
    return ck;
}

int main() {
    char hex[] = "3133455D332015550F23315D";
    char ck = checksum(hex, 11);
    printf("%2x", (unsigned) (unsigned char) ck);
    return 0;
}
As the operations are made on an unsigned char, everything exceeding a byte value is properly discarded, and you obtain your value (0x26 in your example).
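If the device really expects the two's-complement byte described in the question rather than the raw sum, one more line on top of the checksum() above would produce it (a sketch; whether the protocol wants this step is an assumption, since the question notes its example buffer may not be correct):
unsigned char ck = (unsigned char) checksum(hex, 11);
unsigned char expected = (unsigned char) (~ck + 1); // two's complement of the low byte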

Using C++, is it possible to convert an ASCII character to hex?

I have written a program that sets up a client/server TCP socket over which the user sends an integer value to the server through a terminal interface. On the server side I am executing byte commands, for which I need hex values stored in my array.
sprintf(mychararray, "%X", myintvalue);
This code takes my integer and prints it as a hex value into a char array. The only problem is that when I use that array to set my commands, it registers as ASCII chars. So, for example, if I send an integer equal to 3000, it is converted to 0x0BB8 and then stored as 'B' 'B' '8', which corresponds to 42 42 38 in hex. I have looked all over the place for a solution and have not been able to come up with one.
Finally came up with a solution to my problem. First I created an array and stored all byte values from 0x00 to 0xFF in it.
char m_list[256];   // array defined in the class
m_list[0] = 0x00;   // set first array index to zero
int count = 1;      // count variable to step through the array and set members
while (count < 256)
{
    m_list[count] = m_list[count - 1] + 0x01; // populate array with values 0x00 - 0xFF
    count++;
}
Next I created a function that lets me group my hex digits into individual bytes and store them into the array that will be processing my command.
void parse_input(char hex_array[], int i, char ans_array[])
{
    int n = 0;
    int j = 0;
    int idx = 0;
    string hex_values;
    while (n < i - 1)
    {
        if (hex_array[n] == '\0')
        {
            hex_values = '0';
        }
        else
        {
            hex_values = hex_array[n];
        }
        if (hex_array[n + 1] == '\0')
        {
            hex_values += '0';
        }
        else
        {
            hex_values += hex_array[n + 1];
        }
        cout << "This is the string being used in stoul: " << hex_values; // statement for testing
        idx = stoul(hex_values, nullptr, 16);
        ans_array[j] = m_list[idx];
        n = n + 2;
        j++;
    }
}
This function is called right after my previous code:
sprintf(mychararray, "%X", myintvalue);
parse_input(arrayA, sizeof(arrayA), arrayB);
Example: arrayA is an 8-byte char array and arrayB is a 4-byte char array. arrayA should be double the size of arrayB, since you are taking two ASCII values and making a byte pair, e.g. 'A' 'B' = 0xAB.
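For what it's worth, a minimal sketch of the same ASCII-pair-to-byte conversion using only the standard library (the function name and buffers here are illustrative, not from the question):
#include <cstdio>

// Convert a string of hex digits (e.g. "0BB8") into raw bytes {0x0B, 0xB8}.
void hex_string_to_bytes(const char *hex, unsigned char *out, int out_len)
{
    for (int i = 0; i < out_len; i++) {
        unsigned int byte = 0;
        std::sscanf(hex + 2 * i, "%2x", &byte); // read two hex chars as one byte
        out[i] = (unsigned char) byte;
    }
}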
While I was trying to understand your question, I realized that what you needed was more than a single variable. You needed a class, because you want both a string that represents the hex code to be printed and the number itself in the form of an unsigned 16-bit integer, which I deduced would be something like unsigned short int. So I created a class named hexset that does all this for you (I got the idea from bitset):
#include <iostream>
#include <string>

class hexset {
public:
    hexset(int num) {
        this->hexnum = (unsigned short int) num;
        this->hexstring = hexset::to_string(num);
    }
    unsigned short int get_hexnum() { return this->hexnum; }
    std::string get_hexstring() { return this->hexstring; }
private:
    static std::string to_string(int decimal) {
        int length = int_length(decimal);
        std::string ret = "";
        for (int i = (length > 1 ? int_length(decimal) - 1 : length); i >= 0; i--) {
            ret = hex_arr[decimal % 16] + ret;
            decimal /= 16;
        }
        if (ret[0] == '0') {
            ret = ret.substr(1, ret.length() - 1);
        }
        return "0x" + ret;
    }
    static int int_length(int num) {
        int ret = 1;
        while (num > 10) {
            num /= 10;
            ++ret;
        }
        return ret;
    }
    static constexpr char hex_arr[16] = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
    unsigned short int hexnum;
    std::string hexstring;
};

constexpr char hexset::hex_arr[16];

int main() {
    int number_from_file = 3000; // this number is in all forms technically; hex is just another way to represent it
    hexset hex(number_from_file);
    std::cout << hex.get_hexstring() << ' ' << hex.get_hexnum() << std::endl;
    return 0;
}
I assume you'll probably want to do some operator overloading so that you can add to and subtract from this number, assign new numbers, or perform any kind of mathematical or bit-shift operation.

C++ Bytes To Bits Conversion And Then Print

Code taken from: Bytes to Binary in C. Credit: BSchlinker.
The following code I modified to take more than 1 byte at a time. I modified it and got it half working, and then got really confused with my loops. :( I've spent the last day and a half trying to figure it out... but my C++ skills are not really that good (still learning!)
#include <iostream>
using namespace std;

char show_binary(unsigned char u, unsigned char *result, int len);

int main()
{
    unsigned char p40[3] = {0x40, 0x00, 0x0a};
    unsigned char bits[8 * (sizeof(p40))];
    int c;
    c = sizeof(p40);
    show_binary(*p40, bits, 3);
    cout << "\n\n";
    cout << "BIN = ";
    do {
        for (int i = 0; i < 8; i++)
            printf("%d", bits[i + (8 * c)]);
        c++;
    } while (c < 3);
    cout << "\n";
    int a;
    cin >> a;
    return 0;
}
char show_binary(unsigned char u, unsigned char *result, int len)
{
    unsigned char mask = 1;
    unsigned char bits[8 * sizeof(result)];
    int a, b, c;
    a = 0;
    b = 0;
    c = len;
    do {
        for (int i = 0; i < 8; i++)
            bits[i + (8 * a)] = (u[&a] & (mask << i)) != 0;
        a++;
    } while (a < len);
    // Need to reverse it?
    do {
        for (int i = 8; i != -1; i--)
            result[i + (8 * c)] = bits[i + (8 * c)];
        b++;
        c--;
    } while (b < len);
    return *result;
}
After I spit that out:
cout << "BIN = ";
do {
    for (int i = 0; i < 8; i++)
        printf("%d", bits[i + (8 * c)]);
    c++;
} while (c < 3);
I'd like to take bit[11] through bit[the end] and compute a byte every 8 bits, if that makes sense. But first the function should work. Any pro tips on how this should be done? And of course, rip my code apart. I like to learn.
Man, there is a lot going on in this code, so it's hard to know where to start. Suffice to say, you're trying a bit too hard. It sounds like you are trying to 1) pass in a byte array; 2) turn those bytes into a string representation of the binary; and 3) turn that string representation back into a value?
It just so happens I recently did something similar to this in C, which should still work using a C++ compiler.
#include <stdio.h>
#include <string.h>

/* A macro to get a substring */
#define substr(dest, src, dest_size, startPos, strLen) snprintf(dest, dest_size, "%.*s", strLen, src+startPos)

/* Pass in char* array of bytes, get binary representation as string in bitStr */
void str2bs(const char *bytes, size_t len, char *bitStr) {
    size_t i;
    char buffer[9] = "";
    for (i = 0; i < len; i++) {
        sprintf(buffer,
                "%c%c%c%c%c%c%c%c",
                (bytes[i] & 0x80) ? '1' : '0',
                (bytes[i] & 0x40) ? '1' : '0',
                (bytes[i] & 0x20) ? '1' : '0',
                (bytes[i] & 0x10) ? '1' : '0',
                (bytes[i] & 0x08) ? '1' : '0',
                (bytes[i] & 0x04) ? '1' : '0',
                (bytes[i] & 0x02) ? '1' : '0',
                (bytes[i] & 0x01) ? '1' : '0');
        strncat(bitStr, buffer, 8);
        buffer[0] = '\0';
    }
}
To get the string of binary back into a value, it can be done with bit shifting:
unsigned char bs2uc(char *bitStr) {
    unsigned char val = 0;
    int toShift = 0;
    int i;
    for (i = strlen(bitStr) - 1; i >= 0; i--) {
        if (bitStr[i] == '1') {
            val = (1 << toShift) | val;
        }
        toShift++;
    }
    return val;
}
Once you have a binary string, you can then take substrings of any arbitrary 8 bits (or fewer, I guess) and turn them back into bytes.
char *bitStr; /* Let's pretend this is populated with a valid string */
char byte[9] = "";
substr(byte, bitStr, 9, 4, 8);
/* This creates a substring of length 8 starting from index 4 of bitStr */
unsigned char b = bs2uc(byte);
I've actually created a whole suite of value -> binary string -> value functions if you'd like to take a look at them. GitHub - binstr
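For instance, a minimal round trip using the question's p40 bytes might look like this (my own sketch, built only on the two functions and includes above):
int main(void) {
    const char p40[3] = {0x40, 0x00, 0x0a};
    char bitStr[3 * 8 + 1] = "";   /* must start empty, since str2bs appends */
    str2bs(p40, 3, bitStr);
    printf("BIN = %s\n", bitStr);  /* prints 010000000000000000001010 */
    return 0;
}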

How do I convert an integer which is defined in an int array to hex?

I have an int array that represents a very large number, such as:
// ...
unsigned int n1[200];
// ...
n1 = {1,3,4,6,1,...} ==> means my number is 13461...
How can I convert that large number to its hex value?
So here is my take on the problem:
You have an array of digits.
You want to build an unsigned int from this array of digits.
The array of digits could be either HEX digits, or DECIMAL digits.
To build this unsigned long long, assuming an array of DECIMAL digits (most significant digit first):
unsigned long long myNum = 0;
unsigned int n1[200];
int len = 200; // the number of digits actually stored
unsigned long long base = 1;
for (int i = 0; i < len; i++) {
    myNum += base * n1[len - 1 - i]; // the least significant digit is at the end
    base *= 10;
}
To build this unsigned long long, assuming an array of HEX digits:
unsigned long long base = 1;
for (int i = 0; i < len; i++) {
    myNum += base * n1[len - 1 - i];
    base *= 16;
}
(Notice the base 16)
Disclaimer: this is limited to what an unsigned long long can hold, roughly 19 decimal digits (or 16 hex digits) stored in your array. Beyond that the value overflows.
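For arrays longer than that, one workable approach (my own sketch, not from the answers) is long division by 16 performed directly on the decimal digit array; each pass produces one hex digit:
#include <algorithm>
#include <string>
#include <vector>

// Convert a big number given as decimal digits (most significant first)
// into a hex string, via repeated long division by 16.
std::string bigDecimalToHex(std::vector<unsigned int> digits)
{
    const char *hexDigit = "0123456789ABCDEF";
    std::string hex;
    while (!digits.empty()) {
        unsigned int rem = 0;
        std::vector<unsigned int> quot;
        for (unsigned int d : digits) {      // schoolbook division by 16
            unsigned int cur = rem * 10 + d;
            unsigned int q = cur / 16;
            rem = cur % 16;
            if (!quot.empty() || q != 0)     // drop leading zeros
                quot.push_back(q);
        }
        hex += hexDigit[rem];                // remainder = next hex digit, least significant first
        digits = quot;
    }
    std::reverse(hex.begin(), hex.end());
    return hex.empty() ? "0" : hex;
}
For the question's example, bigDecimalToHex({1,3,4,6,1}) returns "3495", and 0x3495 is indeed 13461.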
If it is just a matter of DISPLAYING the number in the correct format...
Well, an int is an int is an int... (in memory).
There are 10 fingers on my hands whether or not I call that number 10, or A.
If you want to format the number for DISPLAY in hex, then try something like:
unsigned int i = 10;
// OR
unsigned int i = 0xA;
printf("My number in hex: %x\n", i);
printf("My number in decimal: %d\n", i);
I'm unsure if you want the hexadecimal represented as a string. If that's the case, here's some code:
#include <iostream>
#include <stack>
using namespace std;

string toHexa(int num) {
    string digit = "0123456789ABCDEF", numStr = "";
    stack<char> s;
    do {
        s.push(digit[num % 16]);
        num /= 16;
    } while (num != 0);
    while (!s.empty()) {
        numStr += s.top();
        s.pop();
    }
    return numStr;
}

int main() {
    int num = 235; // EB in hexa
    cout << num << " to hexadecimal: " << toHexa(num) << endl;
    return 0;
}
You could use the GMP library to make this relatively straightforward.
Use basic_stringstream<unsigned int> to wrap your array.
Use operator >> to read it into an mpz_t variable.
Create another basic_stringstream<unsigned int> for your result.
Use std::hex and operator << to write the variable back out in hexadecimal.
That would work on ASCII digits, but yours aren't. You can still use GMP, but you'll want to use the mpn_get_str and mpn_set_str functions instead. You'll need to copy your digits into an unsigned char[], and then you can specify the base for conversion to mp_limb_t and back to a string of digits.