linux writing string value as hex, difference between x86 and i386? - c++

Relating to my post linux writing string value as hex to serial (which was solved on an x86 system), I now have a problem on an i386 system.
The code works on an Intel Atom Ubuntu system, but on an Intel Xeon with Ubuntu it no longer works.
Could this be a little/big-endian problem?
I used the code from Erik's answer in the post mentioned above. I am using this method to convert:
int convert(std::string str, unsigned char*& charArr)
{
    // count the number of character pairs (i.e. bytes) in the string
    // and dynamically allocate an array of the required size
    const int numBytes = str.size() / 2;
    charArr = new unsigned char[numBytes];
    for (int i = 0; i < numBytes; ++i)
    {
        // grab two characters from the string...
        std::string twoChars = str.substr(2 * i, 2);
        // ...and convert them to an integer using a stringstream
        int byte;
        std::stringstream ss(twoChars);
        ss >> std::hex >> byte;
        // store the result in our char array
        charArr[i] = byte;
    }
    return numBytes; // return the byte count so the caller can use it
}
Calling the method:
const std::string str = "8C000002008E";
unsigned char* toSend;
convert(str, toSend);
And writing to the serial port with:
int kWrite = write(mFd, toSend, sizeof(toSend));
The serial device reacts on the Atom system, but not on the Xeon. The response I get says: undefined format or checksum error.
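Note that sizeof(toSend) is the size of the pointer itself (typically 4 bytes on a 32-bit build, 8 on a 64-bit build), not the 6 bytes encoded by the string, so the write() call sends the wrong number of bytes. A minimal sketch of the call site, assuming the convert() above returns the byte count:

const std::string str = "8C000002008E";
unsigned char* toSend;
const int numBytes = convert(str, toSend);  // 6 bytes for this string
int kWrite = write(mFd, toSend, numBytes);  // write exactly numBytes bytes
delete[] toSend;                            // convert() allocated with new[]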

Related

How to store an unsigned char array as a float value?

I am trying to read sensor data from an Arduino on a Raspberry Pi using RS232 serial communication. I searched for this and found something related at the link below, but was unable to get the full idea.
The OS (kernel) has an internal buffer of 4096 bytes. If this buffer is full and a new character arrives on the serial port, the oldest character in the buffer will be overwritten and thus lost. After a successful call to RS232_OpenComport(), the OS will start to buffer incoming characters.
The values are coming properly from the Arduino to the Raspberry Pi (output attached below) and are stored in an array defined as unsigned char buf[4096].
#include <stdio.h>
#include "rs232.h"

int main()
{
    int i, n,
        cport_nr = 0,  /* /dev/ttyS0 (COM1 on Windows) */
        bdrate = 9600; /* 9600 baud */
    unsigned char buf[4096];
    char mode[] = {'8','N','1',0};

    if (RS232_OpenComport(cport_nr, bdrate, mode))
    {
        printf("Can not open comport\n");
        return 1;
    }

    while (1)
    {
        n = RS232_PollComport(cport_nr, buf, 4095);
        if (n > 0)
        {
            buf[n] = 0; /* null-terminate so buf can be printed as a string */
            for (i = 0; i < n; i++)
            {
                if (buf[i] < 32) /* replace unreadable control codes with dots */
                {
                    buf[i] = '.';
                }
            }
            printf("received %i bytes: %s\n", n, (char *)buf);
        }
    }
    return 0;
}
Now I want to store these values in float/double variables so that I can perform further operations on them. How do I store a value such as 0.01 in a float/double for later use?
From the output in the screenshot it looks like you are sending the string representation of the numbers rather than the actual numbers. You just need to detect those "unreadable control-codes" that you are currently replacing with a ., as they will probably tell you where one number ends and the next begins.
If you are using Qt instead, make QSerialPort *serial; a proper class member, and check for errors when opening the port: serial->open(QIODevice::ReadWrite);. Then insert some qDebug() calls in serialreceived() to see whether the slot is called at all and whether canReadLine() works. You should use a QByteArray to read your data: if the response contains any char that is not string-conformant, the resulting QString will be terminated prematurely, so use readLine() instead of readAll(), like this:
QByteArray data = serial->readLine();
qDebug() << data.toHex(' '); // prints the hex representation of your char array
QString str(data);
qDebug() << str;
First, it would be better to use some other ASCII character (e.g. a space) to separate the numbers, because the . dot is part of a floating-point number. Then you can construct a std::string object from your raw unsigned char array, split it into multiple strings, and convert each string to a float.
#include <boost/algorithm/string/classification.hpp>
#include <boost/algorithm/string/split.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // imagine that this buff has already been filled and preprocessed
    unsigned char buff[1024] = "13.60 13.60 -11.12 -0.3 and let's say that the rest is garbage";
    int n = 28; // let's say that you received 28 bytes
    std::string strBuff(reinterpret_cast<char*>(buff), n); // construct a string from the first 28 bytes of buff
    std::vector<std::string> numbers;
    boost::split(numbers, strBuff, boost::is_any_of(" "), boost::token_compress_on);
    for (const auto& s : numbers) {
        try {
            std::cout << std::stof(s) << std::endl;
        } catch (const std::exception& e) {
            std::cout << s << " is not convertible to float: " << e.what() << std::endl;
        }
    }
    return 0;
}
I took the string-splitting method from this answer, but you can use anything that works for you.
I used reinterpret_cast because std::string accepts char rather than unsigned char as a constructor argument.
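If Boost isn't available, here is a sketch of the same idea using only the standard library (std::istringstream splits on any whitespace, so this assumes a space separator):

#include <iostream>
#include <sstream>
#include <string>

int main() {
    unsigned char buff[1024] = "13.60 13.60 -11.12 -0.3";
    int n = 23; // pretend only the first 23 bytes were received
    std::istringstream iss(std::string(reinterpret_cast<char*>(buff), n));
    std::string token;
    while (iss >> token) { // whitespace-separated tokens
        try {
            std::cout << std::stof(token) << std::endl;
        } catch (const std::exception& e) {
            std::cout << token << " is not convertible to float: " << e.what() << std::endl;
        }
    }
    return 0;
}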

Array of bytes into a string of comma-separated ints

I have an Arduino that controls timers. The settings for timers are stored in byte arrays. I need to convert the arrays to strings to SET a string on an external Redis server.
So, I have many arrays of bytes of different lengths that I need to convert to strings to pass as arguments to a function expecting char[]. I need the values to be separated by commas and terminated with '\0'.
byte timer[4] {1,5,23,120};
byte timer2[6] {0,0,0,0,0,0};
I have succeeded in doing it manually for each array using sprintf() like this:
char buf[30];
sprintf(buf, "%d,%d,%d,%d", timer[0], timer[1], timer[2], timer[3]);
That gives me the output string buf: 1,5,23,120
But I have to use a fixed number of 'placeholders' in sprintf().
I would like to come up with a function to which I could pass an array (e.g. timer[]) and that would build the string, probably using a for loop of variable length (depending on the particular array to process) and several strcat() calls. I have tried a few ways to do this, none of them making sense to the compiler, nor to me!
Which way should I go?
Here is the low-tech way you could do it in plain C.
char* toString(byte* bytes, int nbytes)
{
    // Has to be static so it doesn't go out of scope at the end of the call.
    // You could dynamically allocate memory based on nbytes instead.
    // The size of 128 is arbitrary - pick something you know is big enough.
    static char buffer[128];
    char* bp = buffer;
    *bp = 0; // means the return value is valid even if nbytes is 0
    for (int i = 0; i < nbytes; i++)
    {
        if (i > 0) {
            *bp = ','; bp++;
        }
        // sprintf can have errors, so you probably want to check for a
        // positive result.
        bp += sprintf(bp, "%d", bytes[i]);
    }
    return buffer;
}
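A hypothetical call site (the Serial.println() is just for illustration on the Arduino side); remember that the static buffer is reused, so copy or send the string before the next call:

byte timer[4] = {1, 5, 23, 120};
char* s = toString(timer, 4); // -> "1,5,23,120"
Serial.println(s);            // use or copy s before calling toString() again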
Here is an implementation, assuming that timer is an array (otherwise the size would have to be passed as a parameter), with special handling of the comma.
Basically, print each integer into a temporary buffer, then concatenate it to the final buffer, peppering with commas where needed.
Note that the size of the output buffer isn't checked, mind.
#include <stdio.h>
#include <string.h>

typedef unsigned char byte;

int main()
{
    byte timer[4] = {1,5,23,120};
    int i;
    char buf[30] = "";
    int first_item = 1;
    for (i = 0; i < sizeof(timer)/sizeof(timer[0]); i++)
    {
        char t[10];
        if (!first_item)
        {
            strcat(buf, ",");
        }
        first_item = 0;
        sprintf(t, "%d", timer[i]);
        strcat(buf, t);
    }
    printf("%s\n", buf);
    return 0;
}

Read/write binary object as hex?

I need to serialize various structs to a file.
If possible I'd like the files to be pure ASCII. I could write some kind of serializer for each struct, but there are hundreds, and many contain floats and doubles which I'd like to represent accurately.
I can't use a third-party serialization library, and I don't have the time to write hundreds of serializers.
How can I ASCII-safe serialize this data?
Also, streams please; I hate the look of C-style printf("%02x", data).
I found this solution online and it addresses just this problem:
https://jdale88.wordpress.com/2009/09/24/c-anything-tofrom-a-hex-string/
Reproduced below:
#include <string>
#include <sstream>
#include <iomanip>

// ------------------------------------------------------------------
/*!
    Convert a block of data to a hex string
*/
void toHex(
    void *const data,        //!< Data to convert
    const size_t dataLength, //!< Length of the data to convert
    std::string &dest        //!< Destination string
)
{
    unsigned char *byteData = reinterpret_cast<unsigned char*>(data);
    std::stringstream hexStringStream;

    hexStringStream << std::hex << std::setfill('0');
    for (size_t index = 0; index < dataLength; ++index)
        hexStringStream << std::setw(2) << static_cast<int>(byteData[index]);
    dest = hexStringStream.str();
}

// ------------------------------------------------------------------
/*!
    Convert a hex string to a block of data
*/
void fromHex(
    const std::string &in, //!< Input hex string
    void *const data       //!< Data store
)
{
    size_t length = in.length();
    unsigned char *byteData = reinterpret_cast<unsigned char*>(data);

    std::stringstream hexStringStream;
    hexStringStream >> std::hex;
    for (size_t strIndex = 0, dataIndex = 0; strIndex < length; ++dataIndex)
    {
        // Read out and convert the string two characters at a time
        // (indexing separately keeps the evaluation order well defined)
        const char tmpStr[3] = { in[strIndex], in[strIndex + 1], 0 };
        strIndex += 2;

        // Reset and fill the string stream
        hexStringStream.clear();
        hexStringStream.str(tmpStr);

        // Do the conversion
        int tmpValue = 0;
        hexStringStream >> tmpValue;
        byteData[dataIndex] = static_cast<unsigned char>(tmpValue);
    }
}
This can easily be adapted to read from and write to file streams. The stringstream used in fromHex is still necessary, though, since the conversion must be done two characters at a time.
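For instance, a minimal round-trip sketch using the two functions above (the Sample struct is made up for illustration):

#include <cassert>
#include <cstring>
#include <iostream>

struct Sample { float f; double d; int i; }; // hypothetical struct to serialize

int main() {
    Sample in = { 1.5f, 3.14159, 42 };
    std::string hex;
    toHex(&in, sizeof(in), hex); // hex now holds 2 * sizeof(Sample) characters
    Sample out;
    fromHex(hex, &out);          // decode back into a second instance
    assert(std::memcmp(&in, &out, sizeof(in)) == 0); // byte-exact round trip
    std::cout << hex << std::endl;
}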
Any way you do it, you're going to need serialization code for each struct type. You can't just bit-copy a struct to the external world and expect it to work.
And if you want pure ASCII, don't bother with hex. For serializing float and double, set the output stream to scientific, and the precision to 8 for float and 16 for double. (In scientific format the precision is the number of digits after the decimal point, so this gives 9 and 17 significant digits, enough for an exact round trip. It takes a few more bytes, but it will actually work.)
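For instance, a sketch of that stream setup for a double (precision 16 in scientific format means 17 significant digits):

#include <iomanip>
#include <iostream>
#include <sstream>

int main() {
    double d = 0.1; // not exactly representable in binary
    std::ostringstream os;
    os << std::scientific << std::setprecision(16) << d;
    double back;
    std::istringstream(os.str()) >> back;
    std::cout << os.str() << " round-trips exactly: "
              << std::boolalpha << (back == d) << std::endl; // prints true
}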
For the rest: if the structs are written cleanly, according to some in-house programming guidelines, and only contain basic types, you should be able to parse them directly. Otherwise, the simplest solution is generally to design a very simple descriptor language, describe each struct in it, and run a code generator over it to produce the serialization code.

Error when running g++ app. (encryption of string)

I'm trying to encrypt and decrypt files with C++, using this code:
#include <iostream>

void crypt(char* pData, unsigned int lenData, const char* pKey, unsigned int lenKey)
{
    for (unsigned int i = 0; i < lenData; i++)
        pData[i] = pData[i] ^ pKey[i % lenKey];
}

int main()
{
    char* data = (char*)"any binary string here";
    crypt(data, 22, "key", 3);
    std::cout << data;
}
I'm compiling with g++ (tdm-1) 4.5.1 (MinGW) on Windows 6.1 (Seven), and it compiles with no errors or warnings. When I try to run it, Windows shows a dialog saying "app.exe has stopped working. Windows can check online for a solution to the problem" (something like that; my Windows isn't in English). I have no idea what is wrong.
You're trying to modify a string constant. For obvious reasons (it's constant), this won't work. Instead, do this:
int main()
{
    char data[] = "any binary string here";
    crypt(data, 22, "key", 3);
    std::cout << data;
}
This line is wrong:
char* data = (char*)"any binary string here";
First, you should not use a cast. Next, a string literal is a constant. So it should be:
const char* data = "any binary string here";
But you want to overwrite it, so you need a string that isn't constant, like this:
char data[] = "any binary string here";
Mike has answered this question well: you cannot modify constant string literals. The days of DOS are long over; any up-to-date production-level C++ compiler will issue a warning here with the appropriate flags. To add a little to Mike's answer, here is a good explanation of constant string literals: http://msdn.microsoft.com/en-us/library/69ze775t(v=vs.80).aspx
Also, here is a better way to do it:
#include <iostream>

void crypt(char* pData, unsigned int lenData, const char* pKey, unsigned int lenKey)
{
    for (unsigned int i = 0; i < lenData; ++i)
        pData[i] ^= pKey[i % lenKey];
}

int main()
{
    char data[] = "any binary string here";
    const char key[] = "key";
    crypt(data, sizeof(data) - 1, key, sizeof(key) - 1);
    std::cout << data << std::endl;
}
Note the pre-increment operator, the ^= compound assignment, and the sizeof operators. For simple types the compiler will do this micro-optimization for you, but developing good habits is worthwhile: if you have a complex iterator, using post-increment can hurt you on performance-critical paths. Also, hard-coding the size of strings is error-prone; later you or someone else may change the string and forget to change its length, not to mention that every time you would have to go and count the characters.
Happy coding!

Outputting bit data to binary file C++

I am writing a compression program and need to write bit data to a binary file using C++. If anyone could advise on the write statement, or point me to a website with advice, I would be very grateful.
Apologies if this is a simple or confusing question; I am struggling to find answers on the web.
Collect the bits into whole bytes, such as an unsigned char or std::bitset (where the bitset size is a multiple of CHAR_BIT), then write whole bytes at a time. Computers "deal with bits", but the available abstraction – especially for IO – is that you, as a programmer, deal with individual bytes. Bitwise manipulation can be used to toggle specific bits, but you're always handling byte-sized objects.
At the end of the output, if you don't have a whole byte, you'll need to decide how that should be stored. Both iostreams and stdio can write unformatted data, using ostream::write and fwrite respectively.
Instead of a single char or bitset<8> (8 being the most common value for CHAR_BIT), you might consider using a larger block size, such as an array of 4-32 (or more) chars, or an equivalently sized bitset.
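A minimal sketch of that approach (the BitWriter name and interface are my own, not a standard facility): bits accumulate in a byte, each full byte is written with ostream::write, and the final partial byte is zero-padded:

#include <climits>
#include <fstream>

class BitWriter {
    std::ofstream& out_;
    unsigned char byte_ = 0; // accumulator for up to CHAR_BIT bits
    int nbits_ = 0;          // bits currently in the accumulator
public:
    explicit BitWriter(std::ofstream& out) : out_(out) {}
    void putBit(bool bit) {
        byte_ = static_cast<unsigned char>((byte_ << 1) | (bit ? 1 : 0));
        if (++nbits_ == CHAR_BIT) flush();
    }
    void flush() { // zero-pad and write any remaining bits
        if (nbits_ == 0) return;
        byte_ <<= (CHAR_BIT - nbits_);
        out_.write(reinterpret_cast<const char*>(&byte_), 1);
        byte_ = 0;
        nbits_ = 0;
    }
};

int main() {
    std::ofstream f("bits.bin", std::ios::binary); // binary mode matters on Windows
    BitWriter w(f);
    const char* bits = "1011";
    for (const char* p = bits; *p; ++p) w.putBit(*p == '1');
    w.flush(); // writes the single byte 0xB0
}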
For writing binary, the trick I have found most helpful is to store all the binary data as a single array in memory and then move it all to the hard drive at once. Writing a bit at a time, or a byte at a time, or an unsigned long long at a time is not as fast as storing all the data in an array and using one call to fwrite() to write it to the hard drive.
size_t fwrite ( const void * ptr, size_t size, size_t count, FILE * stream );
Ref: http://www.cplusplus.com/reference/clibrary/cstdio/fwrite/
In English:
fwrite( [array of stored data], [size in bytes of one array element: 1 for unsigned char, 8 for unsigned long long], [number of elements in the array], [FILE*] )
Always check the return value to confirm success!
Additionally, an argument can be made that using as large an element type as possible is the fastest way to go (unsigned long long rather than char). While I am not versed in the implementation of fwrite(), I suspect the time to convert from the natural type used in your code to unsigned long long will outweigh any gain over just letting fwrite() make do with what you have.
Back when I was learning Huffman coding, it took me a few hours to realize that there is a difference between char and unsigned char. Note that for this method you should always use unsigned types to store the raw binary data.
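As a concrete sketch of one bulk fwrite() (the file name and data are arbitrary):

#include <stdio.h>

int main(void) {
    unsigned char data[4096];
    for (int i = 0; i < 4096; i++)
        data[i] = (unsigned char)(i & 0xFF); /* fill with arbitrary bytes */
    FILE* f = fopen("out.bin", "wb");        /* "b": binary mode */
    if (!f) return 1;
    size_t written = fwrite(data, 1, sizeof data, f); /* one call for all 4096 bytes */
    if (written != sizeof data) { fclose(f); return 1; } /* always check the return! */
    fclose(f);
    return 0;
}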
With the class below you can write and read bit by bit:
class bitChar {
public:
    unsigned char* c; // one-byte accumulator for the bits
    int shift_count;  // how many bits are currently in the accumulator
    string BITS;      // the pending bit string, e.g. "10100000"

    bitChar()
    {
        shift_count = 0;
        c = (unsigned char*)calloc(1, sizeof(char));
    }

    // read a whole file and return its contents as a string of '0'/'1'
    string readByBits(ifstream& inf)
    {
        string s = "";
        char buffer[1];
        while (inf.read(buffer, 1))
        {
            s += getBits(*buffer);
        }
        return s;
    }

    void setBITS(string X)
    {
        BITS = X;
    }

    // pack BITS into bytes and write them; returns the number of bits written
    int insertBits(ofstream& outf)
    {
        int total = 0;
        while (BITS.length())
        {
            if (BITS[0] == '1')
                *c |= 1;
            *c <<= 1;
            ++shift_count;
            ++total;
            BITS.erase(0, 1);

            if (shift_count == 7)
            {
                // the eighth bit goes into bit 0 without a further shift
                if (BITS.size() > 0)
                {
                    if (BITS[0] == '1')
                        *c |= 1;
                    ++total;
                    BITS.erase(0, 1);
                }
                writeBits(outf);
                shift_count = 0;
                free(c);
                c = (unsigned char*)calloc(1, sizeof(char));
            }
        }
        if (shift_count > 0)
        {
            // pad the last partial byte with zeros
            *c <<= (7 - shift_count);
            writeBits(outf);
            free(c);
            c = (unsigned char*)calloc(1, sizeof(char));
        }
        outf.close();
        return total;
    }

    // convert one byte to its 8-character '0'/'1' representation
    string getBits(unsigned char X)
    {
        stringstream itoa;
        for (unsigned s = 7; s > 0; s--)
        {
            itoa << ((X >> s) & 1);
        }
        itoa << (X & 1);
        return itoa.str();
    }

    void writeBits(ofstream& outf)
    {
        outf << *c;
    }

    ~bitChar()
    {
        if (c)
            free(c);
    }
};
For example:
#include <iostream>
#include <sstream>
#include <fstream>
#include <string>
#include <stdlib.h>
using namespace std;

int main()
{
    ofstream outf("Sample.dat");
    ifstream inf("Sample.dat");

    string enCoded = "101000001010101010";

    // write to file
    cout << enCoded << endl; // prints 101000001010101010
    bitChar bchar;
    bchar.setBITS(enCoded);
    bchar.insertBits(outf);

    // read from file
    string decoded = bchar.readByBits(inf);
    cout << decoded << endl; // prints 101000001010101010000000
    return 0;
}