Output is not what it should be - C++

So there is a program that I am working on that requires me to access data from a char array containing hex values. I have to use a function, called func() in this example, in order to access the data structure. func() contains three pointer variables, each of a different type, and I can use any of them to access the data in the array. Whichever data type I choose affects what values will be read through the pointer. So here's the code:
#include <cstdio> // for printf

unsigned char data[] =
{
0xBA, 0xDA, 0x69, 0x50,
0x33, 0xFF, 0x33, 0x40,
0x20, 0x10, 0x03, 0x30,
0x66, 0x03, 0x33, 0x40,
};

void func()
{
unsigned char *ch;
unsigned int *i;
unsigned short *s;
unsigned int v;
s = (unsigned short*)&data[0];
v = s[6]; // reads bytes 12 and 13 as one little-endian 16-bit value
printf("val:0x%x \n", v);
}
Output:
val:0x366
The problem with this output is that it should be 0x0366, with a zero in front of the 3, but the leading zero gets dropped by the printf statement, and I'm not allowed to modify that statement. How else could I fix this?

Use a format that specifies leading zeros: %04x.
Without changing the format string passed to printf, or replacing the call entirely, I'm afraid there's no way to affect the output.
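For reference, a minimal sketch (my own illustration; it assumes the format string could be changed, which the question rules out) showing how the two conversions differ:

#include <cstdio>

int main()
{
    unsigned int v = 0x366;

    printf("val:0x%x\n", v);   // prints val:0x366  (no zero padding)
    printf("val:0x%04x\n", v); // prints val:0x0366 (padded to 4 hex digits)
    return 0;
}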

Related

Casting to double pointer on Arm Cortex-M3

I am using an Arm Cortex-M3 processor. I receive binary data in an unsigned char array, which must be cast into a suitable variable to be used for further computation:
unsigned char gps[24] = { 0xFA, 0x05, 0x08, 0x00, 0x10, 0x00,0xA4, 0x15, 0x02, 0x42, 0x4D, 0xDF, 0xEB, 0x3F, 0xF6, 0x1A, 0x36, 0xBE, 0xBF, 0x2D, 0xA4, 0x41,
0xAF, 0x1A };
int i = 6;
float f = (float) *(double*)&gps[i];
This code works on a computer and gives the correct value of "f", but it fails on the Cortex-M3. I understand that the processor does not have a floating-point unit and hence doesn't support 64-bit operations in hardware, but is there a work-around for the cast shown above?
Note that the code below works on the processor; only the casting shown above fails:
double d = 9.7;
Also note that 16-bit and 32-bit casts work, as shown below; only double or uint64_t fail.
uint16_t k = *(uint16_t*)&gps[i];
Is there an alternative solution?
Casting the address of an unsigned char to a pointer to double – and then using it – violates the strict aliasing rule; more importantly in your case (as discussed below), it also breaks the alignment requirements for accessing multi-byte (i.e. double) data units.
Many compilers will warn about this; clang-cl gives the following for the (double*)&gps[i] expression:
warning : cast from 'unsigned char *' to 'double *' increases required
alignment from 1 to 8 [-Wcast-align]
Now, some architectures aren't too fussy about alignment of data types, and the code may (seem to) work on many of those. However, the Cortex-M3 is very fussy about the alignment requirements for multi-byte data types (such as double).
To remove the undefined behaviour, you should use the memcpy function to copy the component bytes of your gps array into a real double variable, then cast that to a float:
#include <string.h> // for memcpy

unsigned char gps[24] = {0xFA, 0x05, 0x08, 0x00, 0x10, 0x00, 0xA4, 0x15, 0x02, 0x42, 0x4D,
0xDF, 0xEB, 0x3F, 0xF6, 0x1A, 0x36, 0xBE, 0xBF, 0x2D, 0xA4, 0x41, 0xAF, 0x1A };
int i = 6;
double d; // This will be correctly aligned by the compiler
memcpy(&d, &gps[i], sizeof(double)); // This doesn't need source alignment
float f = (float)d; // ... so now, this is a 'safe' cast down to single precision
The memcpy call will use (or generate) code that can access unaligned data safely – even if that means a significant slow-down of the access.
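If this kind of unaligned read is needed in several places, one option (my own addition, not part of the answer above) is to wrap the memcpy in a small helper; a minimal sketch, assuming the bytes already match the host's double representation and byte order:

#include <string.h>

// Hypothetical helper: read a double stored at an arbitrary (possibly
// unaligned) offset inside a byte buffer.
static double read_double_unaligned(const unsigned char *buf, int offset)
{
    double d;
    memcpy(&d, buf + offset, sizeof d); // memcpy copes with any source alignment
    return d;
}

// Usage with the gps array above:
// float f = (float)read_double_unaligned(gps, 6);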

Char array from cin

I have a function in C++ that accepts a char[] argument. If the char[] is hardcoded like
char t[] = { 0x41, 0x54, 0x2b, 0x4d,
0x53, 0x4c, 0x53, 0x45, 0x43, 0x55,
0x52, 0x3d, 0x31, 0x2c, 0x30 };
it works. I need to be able to get the char[] from cin; the data from cin will be the ASCII text AT+MSLSECUR=1,0, and I need to convert it to the equivalent of the hardcoded char[] I show above.
I don't know where to start. I tried simply making cin read into a char[], but that doesn't seem to work; the char[] data is wrong.
I am new to this, so please forgive my lack of knowledge.
I guess this is what you are asking for:
std::string input;
std::cin >> input;
char *YourCharArray = new char[input.size() + 1];
strcpy(YourCharArray, input.c_str());
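As a side note (my own addition, not part of the answer above): operator>> stops at the first whitespace, so if a whole line is wanted, std::getline is the usual choice. A minimal self-contained sketch, where sendCommand is a hypothetical stand-in for your function taking a char[]:

#include <cstring>
#include <iostream>
#include <string>

// Hypothetical stand-in for the function that takes a char[] argument.
void sendCommand(char cmd[]) { std::cout << "sending: " << cmd << "\n"; }

int main()
{
    std::string input;
    std::getline(std::cin, input); // e.g. AT+MSLSECUR=1,0

    char *buffer = new char[input.size() + 1];
    std::strcpy(buffer, input.c_str()); // the characters are already the byte values 0x41, 0x54, ... shown above

    sendCommand(buffer);
    delete[] buffer;
    return 0;
}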

C++ Send bytes from a string?

I am writing a little program that talks to the serial port. I got the program working fine with a line like this:
unsigned char send_bytes[] = { 0x0B, 0x11, 0x00, 0x02, 0x00, 0x69, 0x85, 0xA6, 0x0e, 0x01, 0x02, 0x3, 0xf };
However, the bytes to send are variable, so I want to make something like this:
char *blahstring;
blahstring = "0x0B, 0x11, 0x00, 0x02, 0x00, 0x69, 0x85, 0xA6, 0x0e, 0x01, 0x02, 0x3, 0xf"
unsigned char send_bytes[] = { blahstring };
It doesn't give me an error, but it also doesn't work. Any ideas?
A byte string is something like this:
char *blahString = "\x0B\x11\x00\x02\x00\x69\x85\xA6\x0E\x01\x02\x03\x0f";
Also, remember that this is not a regular string, so it is wise to declare it explicitly as an array of characters with a specific size (one extra byte for the terminating '\0' that the literal adds):
Like so:
unsigned char blahString[14] = {"\x0B\x11\x00\x02\x00\x69\x85\xA6\x0E\x01\x02\x03\x0f"};
unsigned char sendBytes[13];
memcpy(sendBytes, blahString, 13); // and you've successfully copied 13 bytes from blahString to sendBytes
not the way you've defined it.
EDIT:
To answer why your first send_bytes works and the second doesn't:
The first one creates an array of individual bytes, whereas the second one creates a string of ASCII characters. So the length of the first send_bytes is 13 bytes, whereas the length of the second one would be much higher, since the sequence of bytes is the ASCII equivalent of the individual characters in blahstring.
blahstring is a string of characters.
The 1st character is 0, the 2nd character is x, the 3rd character is 0, the 4th character is B, and so on. So the line
unsigned char send_bytes[] = { blahstring };
is an array (assuming that you perform a cast!) that will have one item.
But in the example that works, the array's 1st element has the value 0x0B, the 2nd element the value 0x11, and so on.
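Since the question says the bytes to send vary at run time, one possible approach (my own sketch, not taken from the answers above) is to parse text of the form "0x0B, 0x11, ..." into a vector of bytes with strtoul:

#include <cstdlib>
#include <string>
#include <vector>

// Parse text such as "0x0B, 0x11, 0x00" into raw byte values.
std::vector<unsigned char> parseBytes(const std::string &text)
{
    std::vector<unsigned char> bytes;
    const char *p = text.c_str();
    char *end = 0;
    while (*p) {
        unsigned long value = std::strtoul(p, &end, 16); // understands the "0x" prefix
        if (end == p) { ++p; continue; }                 // skip separators such as ',' and ' '
        bytes.push_back(static_cast<unsigned char>(value));
        p = end;
    }
    return bytes;
}

// Usage (the serial write call is hypothetical):
// std::vector<unsigned char> send_bytes = parseBytes("0x0B, 0x11, 0x00, 0x02");
// write(fd, &send_bytes[0], send_bytes.size());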

How can I convert 0x70, 0x61, 0x73 ... etc to Pas ... etc in C++?

I am using MSVC++ 2010 Express, and I would love to know how to convert
BYTE Key[] = {0x50,0x61,0x73,0x73,0x77,0x6F,0x72,0x64};
to "Password". I am having a lot of trouble doing this. :( I will use this knowledge to take things such as...
BYTE Key[] { 0xC2, 0xB3, 0x72, 0x3C, 0xC6, 0xAE, 0xD9, 0xB5, 0x34, 0x3C, 0x53, 0xEE, 0x2F, 0x43, 0x67, 0xCE };
and various other variables, and convert them accordingly.
I'd like to end up with "Password" stored in a char array.
Key is an array of bytes. If you want to store it in a string, for example, you should construct the string using its range constructor, that is:
string key_string(Key, Key + sizeof(Key)/sizeof(Key[0]));
Or if you can compile using C++11:
string key_string(begin(Key), end(Key));
To get a char* I'd go the C way and use strndup:
char* key_string = strndup(reinterpret_cast<const char*>(Key), sizeof(Key)/sizeof(Key[0])); // note: strndup is POSIX, so it may not be available on MSVC
However, if you're using C++ I strongly suggest you use string instead of char* and only convert to char const* when absolutely necessary (e.g. when calling a C API). See here for good reasons to prefer std::string.
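For completeness, a minimal compilable sketch of the range-constructor approach (my own illustration; the BYTE typedef is assumed here so that <windows.h> isn't needed):

#include <iostream>
#include <string>

typedef unsigned char BYTE; // assumption: BYTE is an alias for unsigned char, as in the Windows headers

int main()
{
    BYTE Key[] = {0x50, 0x61, 0x73, 0x73, 0x77, 0x6F, 0x72, 0x64};

    std::string key_string(Key, Key + sizeof(Key) / sizeof(Key[0]));
    std::cout << key_string << "\n";              // prints Password

    const char *as_c_string = key_string.c_str(); // only when a C-style string is really needed
    std::cout << as_c_string << "\n";
    return 0;
}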
All you are lacking is a null terminator, so after doing this:
char Key_str[(sizeof Key)+1];
memcpy(Key_str, Key, sizeof Key);
Key_str[sizeof Key] = '\0';
Key_str will be usable as a regular char * style string.

Hex to String Conversion C++/C/Qt?

I am interfacing with an external device which is sending data in hex format. It is of the form:
> %abcdefg,+xxx.x,T,+yy.yy,T,+zz.zz,T,A*hhCRLF
CR LF is carriage return line feed
hh->checksum
%abcdefg -> header
Each character in the above packet is sent as a hex representation (the xx, yy, abcd etc. are replaced with actual numbers). The problem is that at my end I store it in a const char*, and during the implicit conversion the checksum, say 0x05, is converted to \0x05; here \0, being the null character, terminates my string. This is perceived as incorrect frames when it is not. Though I could change the implementation to process raw bytes (in hex form), I was just wondering whether there is another way out, because this greatly simplifies the processing of bytes, and this is what programmers are meant to do.
Also, in cutecom (on Linux RHEL 4) I checked the data on the serial port, and there too we noticed \0x05 instead of 5 for the checksum.
Note that for storing incoming data I am using
//store data from serial here
unsigned char Buffer[SIZE];
//convert to a QString, here is where the problem arises
QString str((const char*)Buffer);
QString is Qt's "string" clone. The library is not the issue here; I could also use the STL, but the C++ string library does the same thing. Has somebody tried this type of experiment before? Do share your views.
EDIT
This is the sample code you can check for yourself also:
#include <cstdio>
#include <iostream>
#include <string>
#include <QString>
#include <QApplication>
#include <QByteArray>
using std::cout;
using std::string;
using std::endl;
int main(int argc,char* argv[])
{
QApplication app(argc,argv);
int x = 0x05;
const char mydata[] = {
0x00, 0x00, 0x03, 0x84, 0x78, 0x9c, 0x3b, 0x76,
0xec, 0x18, 0xc3, 0x31, 0x0a, 0xf1, 0xcc, 0x99};
QByteArray data = QByteArray::fromRawData(mydata, sizeof(mydata));
printf("Hello %s\n",data.data());
string str("Hello ");
unsigned char ch[]={22,5,6,7,4};
QString s((const char*)ch);
qDebug("Hello %s",qPrintable(s));
cout << str << x ;
cout << "\nHello I am \0x05";
cout << "\nHello I am " << "0x05";
return app.exec();
}
QByteArray text = QByteArray::fromHex("517420697320677265617421");
text.data(); // returns "Qt is great!"
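Going the other way (closer to what the question describes, i.e. turning raw bytes into readable hex text), QByteArray also offers toHex(); a small sketch of my own, wrapping the buffer with an explicit length so embedded control bytes are not treated as terminators:

#include <QByteArray>
#include <QDebug>

int main()
{
    const unsigned char Buffer[] = { 0x49, 0x46, 0x50, 0x05, 0x0d, 0x0a };

    // Explicit length: embedded 0x05 (or even 0x00) bytes are kept, nothing is cut off
    QByteArray raw(reinterpret_cast<const char *>(Buffer), sizeof(Buffer));

    qDebug() << raw.toHex(); // "494650050d0a"
    return 0;
}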
If your 0x05 is converted to the char '\x05', then you don't have hexadecimal values (that only makes sense if you have numbers as strings anyway), but binary ones. In C and C++, a char is basically just another integer type with very little added magic. So if you have a 5 and assign it to a char, what you get is whatever character your system's encoding defines as the fifth character. (In ASCII, that would be the ENQ char, whatever that means nowadays.)
If what you want instead is the char '5', then you need to convert the binary value into its string representation. In C++, this is usually done using streams:
const char ch = 5; // '\x05'
std::ostringstream oss;
oss << static_cast<int>(ch);
const std::string& str = oss.str(); // str now contains "5"
Of course, the C standard library also provides functions for this conversion. If streaming is too slow for you, you might try those.
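Extending the same idea to a whole buffer (again my own sketch, not part of the answer), std::hex together with std::setw and std::setfill renders each byte as two hex digits:

#include <cstddef>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

// Render every byte of a buffer as two hex digits, e.g. 0x05 -> "05".
std::string toHexString(const unsigned char *buf, std::size_t len)
{
    std::ostringstream oss;
    for (std::size_t i = 0; i < len; ++i)
        oss << std::hex << std::setw(2) << std::setfill('0')
            << static_cast<int>(buf[i]);
    return oss.str();
}

int main()
{
    const unsigned char frame[] = { 0x25, 0x41, 0x2a, 0x05, 0x0d, 0x0a };
    std::cout << toHexString(frame, sizeof(frame)) << "\n"; // prints 25412a050d0a
    return 0;
}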
I think C++ string classes are usually designed to handle zero-terminated char sequences. If your data is of known length (as it appears to be), then you could use a std::vector. This will provide some of the functionality of a string class while ignoring nulls within the data.
As I see it, you want to eliminate the control ASCII symbols. You could do it in the following way:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <string>
#include <QtCore/QString>
#include <QtCore/QByteArray>
using namespace std;
// test data from your comment
char data[] = { 0x49, 0x46, 0x50, 0x4a, 0x4b, 0x51, 0x52, 0x43, 0x2c, 0x31,
0x32, 0x33, 0x2e, 0x34, 0x2c, 0x54, 0x2c, 0x41, 0x2c, 0x2b,
0x33, 0x30, 0x2e, 0x30, 0x30, 0x2c, 0x41, 0x2c, 0x2d, 0x33,
0x30, 0x2e, 0x30, 0x30, 0x2c, 0x41, 0x2a, 0x05, 0x0d, 0x0a };
// functor to remove control characters
struct toAscii
{
    // values >127 will be eliminated too
    char operator ()( char value ) const
    {
        if ( value < 32 && value != 0x0d && value != 0x0a )
            return '.';
        return value;
    }
};
int main(int argc,char* argv[])
{
string s;
transform( &data[0], &data[sizeof(data)], back_inserter(s), toAscii() );
cout << s; // STL string
// convert to QString ( if necessary )
QString str = QString::fromStdString( s );
QByteArray d = str.toAscii();
cout << d.data(); // QString
return 0;
}
The code above prints the following in console:
IFPJKQRC,123.4,T,A,+30.00,A,-30.00,A*.
If you have continuous stream of data you'll get something like:
IFPJKQRC,123.4,T,A,+30.00,A,-30.00,A*.
IFPJKQRC,123.4,T,A,+30.00,A,-30.00,A*.
IFPJKQRC,123.4,T,A,+30.00,A,-30.00,A*.
IFPJKQRC,123.4,T,A,+30.00,A,-30.00,A*.
IFPJKQRC,123.4,T,A,+30.00,A,-30.00,A*.