How can I store hexadecimals inside an array? (C++ MFC)

I have to use an array of hexadecimal values because I'm writing a program to communicate with a video server controller, and it only understands messages in hexadecimal. I can connect the video controller to my server, but when I try to send messages using the send() function, passing an array of unsigned char that contains my information in hexadecimal, it doesn't work.
This is how I am using the array. I don't know if it is correct.
void sendMessage()
{
    int retorno;
    CString TextRetorno;
    unsigned char HEX_bufferMessage[12]; // declaration

    // store info
    HEX_bufferMessage[0]  = 0xF0;
    HEX_bufferMessage[1]  = 0x15;
    HEX_bufferMessage[2]  = 0x31;
    HEX_bufferMessage[3]  = 0x02;
    HEX_bufferMessage[4]  = 0x03;
    HEX_bufferMessage[5]  = 0x00;
    HEX_bufferMessage[6]  = 0x00;
    HEX_bufferMessage[7]  = 0xD1;
    HEX_bufferMessage[8]  = 0xD1;
    HEX_bufferMessage[9]  = 0x00;
    HEX_bufferMessage[10] = 0x00;
    HEX_bufferMessage[11] = 0xF7;

    retorno = send(sckSloMo, (const char*) HEX_bufferMessage, sizeof(HEX_bufferMessage), 0);

    TextRetorno.Format("%d", retorno);
    AfxMessageBox(TextRetorno); // value = 12

    if (retorno == SOCKET_ERROR)
    {
        AfxMessageBox("Error Send!! =[ ");
        return;
    }
    return;
}

Pop quiz. What's the difference between:
int n = 0x0F;
and:
int n = 15;
If you said, "nothing," you're correct.
When assigning integral values, the radix you write the literal in (0x for hexadecimal, a leading 0 for octal, or no prefix for decimal) makes no difference in what is actually stored. The notation is a convenience for you, the programmer, only. These are integral variables we're talking about -- they store numeric data only. They don't store or care about radix. In fact, you might be surprised to learn that when you assign a numeric value to an integral variable, what is actually stored isn't decimal or hexadecimal or even octal -- it's binary.
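To see this concretely, here is a minimal sketch; all three literals below produce the same stored value:

#include <cassert>

int main()
{
    int a = 0x0F; // hexadecimal literal
    int b = 017;  // octal literal
    int c = 15;   // decimal literal
    assert(a == b && b == c); // same binary value in all three
}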
Since you're storing these values as unsigned char, and char (unsigned or otherwise) is really just an integral type, what you're doing is fine:
HEX_bufferMessage[0] = 0xF0;
HEX_bufferMessage[1] = 0x15;
HEX_bufferMessage[2] = 0x31;
but your question makes no sense:
Anyone knows if using an array of unsigned char is the right way to
store hexadecimals??
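Incidentally, since the message bytes are fixed, the buffer from the question can be written more compactly with a brace-enclosed initializer (same values, same type):

unsigned char HEX_bufferMessage[12] = {
    0xF0, 0x15, 0x31, 0x02, 0x03, 0x00,
    0x00, 0xD1, 0xD1, 0x00, 0x00, 0xF7
};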

Related

How do I convert a single char to a uint in a well-defined, cross-platform way?

Let's say I have a single char:
char myChar = 'A';
and I want to populate a uint:
uint8_t myUint8 = 0; // 0 is just a default;
is it well defined to do this:
myUint8 = static_cast<uint8_t>(myChar);
So in this case I would expect
myUint8 == 65
to evaluate to true. Is this well defined? Will this work cross-platform? If not, how can I make it well defined and portable?
Update: While I would eventually like to handle the problems of determining the encoding scheme and of whether uint8_t is available on the target, they are beyond the scope of this question, so I would like to refine my example as follows:
char myChar = 'A';
unsigned myUnsigned = 0; // 0 is just a default;
myUnsigned = static_cast<unsigned>(myChar);
char myRoundTripChar = static_cast<char>(myUnsigned);
printf("char is unchanged: %s", myRoundTripChar == 'A' ? "true" : "false" );
The desired output here is true. This alternate example removes all the 'magic numbers' and the types that may not be defined on the target, I think, to show the problem another way.

'Initializer fails to determine size' error

I'm a little new to programming. How do I store a variable in message? I'm trying to make a wireless temperature sensor using LoRa, with an Arduino Uno + Dragino shield. The results are to be displayed on The Things Network. Everything else is working fine. Why do I get the error written below? {temp} does not work either.
CODE:
int temp = 25;
// Payload to send (uplink)
static uint8_t message[] = temp;
Error:
HelloWorld1:77: error: initializer fails to determine size of 'message'
static uint8_t message[] = temp;
^
HelloWorld1:77: error: array must be initialized with a brace-enclosed initializer
Multiple libraries were found for "lmic.h"
Used: C:\Users\\Documents\Arduino\libraries\arduino-lmic-master
Not used: C:\Program Files (x86)\Arduino\libraries\arduino-lmic-master
exit status 1
initializer fails to determine size of 'message'
The compiler has to know the size of the array when you declare it. It can find it out either directly from the value in [] (e.g. uint8_t message[2]) or, if there isn't any value there, from the length of a brace-enclosed initializer, i.e. the list of comma-separated values inside { } that you may assign to the array at declaration.
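For example:

uint8_t message[2];           // size given explicitly in []
uint8_t other[] = {1, 2, 3};  // size (3) deduced from the brace-enclosed initializer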
That aside, you can't directly store an int value (2 bytes on the Uno, signed) in a uint8_t (1 byte, unsigned). Since (I suppose) you need to transmit the data as a uint8_t array, you can do as follows:
int temp = 25;
// store temp in a uint8_t array with two elements (needed to store two bytes)
uint8_t message[2];
message[0] = temp >> 8; // Most significant byte, MSB
message[1] = temp; // Least significant byte, LSB
or

int temp = 25;
// store temp in a uint8_t array with two elements (needed to store two bytes)
uint8_t message[2] = { (uint8_t)(temp >> 8), (uint8_t)temp }; // MSB first, then LSB
Then transmit message, and on the receiver "reconvert" it to an int: temp = (message[0] << 8) | message[1];.
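Putting the two halves together, a minimal self-contained round-trip sketch (plain C++; int16_t is used here to match the Uno's 16-bit int):

#include <cstdio>
#include <cstdint>

int main()
{
    int16_t temp = 25;
    uint8_t message[2];
    message[0] = (uint8_t)(temp >> 8); // Most significant byte, MSB
    message[1] = (uint8_t)temp;        // Least significant byte, LSB

    // ... transmit message ...

    int16_t received = (int16_t)((message[0] << 8) | message[1]);
    printf("%d\n", received); // prints 25
}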

Convert array values of 1's and 0's to binary

In the Arduino IDE, I am placing all of the input values into an array like so:
int eOb1 = digitalRead(PrOb1);
int eLoop = digitalRead(PrLoop);
int eOb2 = digitalRead(PrOb2);
InputValues[0] = eOb1;
InputValues[1] = eLoop;
InputValues[2] = eOb2;
InputValues[3] = 0;
InputValues[4] = 0;
InputValues[5] = 0;
InputValues[6] = 0;
InputValues[7] = 0;
I would like to convert it to a byte array like so: 00000111.
Can you show me how, please? I tried using a for loop to iterate through the values, but it doesn't work:
char bin[8];
for(int i = 0; i < 8; ++i) {
bin &= InputValues[i];
}
If I understand your requirement correctly, you have an array of individual bits and you need to convert it into a byte that has the corresponding bits.
So to start, you should declare bin to be of type unsigned char instead of char[8]. char[8] means an array of 8 chars (8 bytes), whereas you only need a single byte.
Then you need to initialize it to 0. (This is important, since |= needs the variable to start from a defined value.)
unsigned char bin = 0;
Now, unsigned char is guaranteed to be 1 byte, but a byte is not guaranteed to be 8 bits. So you should use something like uint8_t if it is available.
Finally you can set the appropriate bits in bin as -
for(int i = 0; i < 8; ++i) {
bin |= (InputValues[i] << i);
}
There are two things I have changed.
I used |= instead of &=. This is the bitwise OR operator. You need OR here because it sets the correct bit in the LHS while leaving the other bits untouched. An AND won't necessarily set that bit, and it will also mask away (set to 0) the other bits.
I shifted each input bit to its corresponding position using << i.
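Putting it together, a minimal self-contained sketch of the technique (the inputs are assumed to be 0 or 1, as digitalRead returns):

#include <cstdio>
#include <cstdint>

int main()
{
    int InputValues[8] = {1, 1, 1, 0, 0, 0, 0, 0}; // example readings
    uint8_t bin = 0;                               // must start at 0 for |= to accumulate
    for (int i = 0; i < 8; ++i) {
        bin |= (InputValues[i] << i);              // set bit i when input i is 1
    }
    printf("%02X\n", bin);                         // prints 07, i.e. 00000111
}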

Convert hex integer into form "\x"

DWORD FindPattern(DWORD base, DWORD size, char *pattern, char *mask)
{
    // Get length for our mask, this will allow us to loop through our array
    DWORD patternLength = (DWORD)strlen(mask);

    for (DWORD i = 0; i < size - patternLength; i++)
    {
        bool found = true;
        for (DWORD j = 0; j < patternLength; j++)
        {
            // If we have a ? in our mask then we have true by default,
            // or if the bytes match then we keep searching until finding it or not
            found &= mask[j] == '?' || pattern[j] == *(char*)(base + i + j);
        }

        // Found = true, our entire pattern was found
        // Return the memory addy so we can write to it
        if (found)
        {
            return base + i;
        }
    }

    return NULL;
}
Above is my FindPattern function, which I use to find bytes in a given section of memory. Here's how I call it:
DWORD PATTERN = FindPattern(0xC0000000, 0x20000,"\x1F\x37\x66\xE3", "xxxx");
PrintStringBottomCentre("%02x", PATTERN);
Now, say I had an integer, for example 0xDEADBEEF.
I want to convert this into a char pointer like "\xDE\xAD\xBE\xEF", so that I can pass it into my FindPattern function. How would I do this?
You have to be careful here. On many architectures including x86, ints are stored using little endian, meaning that the int 0xDEADBEEF is stored in memory in this order: EF BE AD DE. But the char array is stored in the order DE AD BE EF.
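A quick way to see this, assuming a little-endian machine such as x86:

#include <cstdio>

int main()
{
    unsigned int n = 0xDEADBEEF;
    unsigned char* p = (unsigned char*)&n;
    // Prints "EF BE AD DE" on a little-endian machine.
    printf("%02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);
}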
So the question is, are you trying to find an int 0xDEADBEEF stored in memory, or do you actually want the sequence of bytes DE AD BE EF?
If you want the int, don't use a char* array at all. Pass in your pattern and mask as DWORDs, and you can simplify that function a lot.
If you want to find the sequence of bytes, then don't store it as an int in the first place. Just get the input as a char array and pass it directly in as your pattern.
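For the first option (searching for the int value directly), a minimal sketch of that simplification; FindDword is a hypothetical helper, with the same assumptions about readable memory as the original function:

DWORD FindDword(DWORD base, DWORD size, DWORD value)
{
    for (DWORD i = 0; i + sizeof(DWORD) <= size; i++)
    {
        // Compare the four bytes at this offset to the value directly;
        // little-endian byte order is handled by the CPU for us.
        if (*(DWORD*)(base + i) == value)
            return base + i;
    }
    return 0;
}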
Edit: you can try something like this, which I think will give you what you want:
int a = 0xDEADBEEF;
char pattern[4];
pattern[0] = (a >> 24) & 0xFF; // 0xDE
pattern[1] = (a >> 16) & 0xFF; // 0xAD
pattern[2] = (a >> 8) & 0xFF;  // 0xBE
pattern[3] = a & 0xFF;         // 0xEF
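Passing that buffer into the question's FindPattern then looks like this (note that pattern here is raw bytes rather than a NUL-terminated string; the search length comes from the mask):

DWORD addr = FindPattern(0xC0000000, 0x20000, pattern, "xxxx");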
The \ character in C/C++ source code is an escape character, so whatever follows it is translated into the escape sequence you wrote, e.g. the hex conversion (\x) in your string. To avoid that, add another \ before it so the backslash is treated as a normal character.
Ex.) \\xDE\\xAD\\xBE\\xEF

C Code Acting Differently to C++ on Lookup

I have the following code block (NOT written by me), which performs mapping and recodes ASCII characters to EBCDIC.
// Variables.
CodeHeader* tchpLoc = {};
...
memset(tchpLoc->m_ucpEBCDCMap, 0xff, 256);
for (i = 0; i < 256; i++) {
    if (tchpLoc->m_ucpASCIIMap[i] != 0xff) {
        ucTmp2 = i;
        asc2ebn(&ucTmp1, &ucTmp2, 1);
        tchpLoc->m_ucpEBCDCMap[ucTmp1] = tchpLoc->m_ucpASCIIMap[i];
    }
}
The CodeHeader definition is
typedef struct {
    ...
    UCHAR* m_ucpASCIIMap;
    UCHAR* m_ucpEBCDCMap;
} CodeHeader;
and the method that seems to be giving me problems is
void asc2ebn(char* szTo, char* szFrom, int nChrs)
{
    while (nChrs--)
        *szTo++ = ucpAtoe[(*szFrom++) & 0xff];
}
[Note, the unsigned char array ucpAtoe[256] is copied at the end of the question for reference].
Now, I have an old C application and my C++11 conversion running side by side. Both write a massive .bin file, and there is a tiny discrepancy that I have traced to the above code. What happens in both is that the block
...
if (tchpLoc->m_ucpASCIIMap[i] != 0xff) {
    ucTmp2 = i;
    asc2ebn(&ucTmp1, &ucTmp2, 1);
    tchpLoc->m_ucpEBCDCMap[ucTmp1] = tchpLoc->m_ucpASCIIMap[i];
}
gets entered for i = 32, and asc2ebn returns ucTmp1 as 64 ('@') for both the C and the C++ variants. Great. The next entry is for i = 48; for this value the C code returns ucTmp1 as 240 while the C++ code returns ucTmp1 as -16 (both rendered as 'ð'). My question is: why does this lookup/conversion produce different results for exactly the same input and lookup array (copied below)?
In this case the old C code is taken as correct, so I want the C++ to produce the same result for this lookup/conversion. Thanks for your time.
static UCHAR ucpAtoe[256] = {
'\x00','\x01','\x02','\x03','\x37','\x2d','\x2e','\x2f',/*00-07*/
'\x16','\x05','\x25','\x0b','\x0c','\x0d','\x0e','\x0f',/*08-0f*/
'\x10','\x11','\x12','\xff','\x3c','\x3d','\x32','\xff',/*10-17*/
'\x18','\x19','\x3f','\x27','\x22','\x1d','\x35','\x1f',/*18-1f*/
'\x40','\x5a','\x7f','\x7b','\x5b','\x6c','\x50','\xca',/*20-27*/
'\x4d','\x5d','\x5c','\x4e','\x6b','\x60','\x4b','\x61',/*28-2f*/
'\xf0','\xf1','\xf2','\xf3','\xf4','\xf5','\xf6','\xf7',/*30-37*/
'\xf8','\xf9','\x7a','\x5e','\x4c','\x7e','\x6e','\x6f',/*38-3f*/
'\x7c','\xc1','\xc2','\xc3','\xc4','\xc5','\xc6','\xc7',/*40-47*/
'\xc8','\xc9','\xd1','\xd2','\xd3','\xd4','\xd5','\xd6',/*48-4f*/
'\xd7','\xd8','\xd9','\xe2','\xe3','\xe4','\xe5','\xe6',/*50-57*/
'\xe7','\xe8','\xe9','\xad','\xe0','\xbd','\xff','\x6d',/*58-5f*/
'\x79','\x81','\x82','\x83','\x84','\x85','\x86','\x87',/*60-67*/
'\x88','\x89','\x91','\x92','\x93','\x94','\x95','\x96',/*68-6f*/
'\x97','\x98','\x99','\xa2','\xa3','\xa4','\xa5','\xa6',/*70-77*/
'\xa7','\xa8','\xa9','\xc0','\x6a','\xd0','\xa1','\xff',/*78-7f*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*80-87*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*88-8f*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*90-97*/
'\xff','\xff','\xff','\x4a','\xff','\xff','\xff','\xff',/*98-9f*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*a0-a7*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*a8-af*/
'\xff','\xff','\xff','\x4f','\xff','\xff','\xff','\xff',/*b0-b7*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*b8-bf*/
'\xff','\xff','\xff','\xff','\xff','\x8f','\xff','\xff',/*c0-c7*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*c8-cf*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*d0-d7*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*d8-df*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*e0-e7*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff',/*e8-ef*/
'\xff','\xff','\xff','\x8c','\xff','\xff','\xff','\xff',/*f0-f7*/
'\xff','\xff','\xff','\xff','\xff','\xff','\xff','\xff' };
In both C and C++, the standard doesn't require char to be a signed or an unsigned type. It's implementation-defined, and apparently your C compiler decided char should be unsigned, while your C++ compiler decided it should be signed.
For GCC, the flag that makes char unsigned is -funsigned-char. For MSVC, it's /J.
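A minimal sketch showing the difference (compare the output with and without GCC's -funsigned-char):

#include <cstdio>

int main()
{
    char c = '\xf0';   // the same bit pattern as the table entry for '0'
    int n = c;         // the promoted value depends on char's signedness
    printf("%d\n", n); // prints -16 where char is signed, 240 where unsigned
}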