How is this Checksum calculated? - c++

Good day friends.
I'm using Qt C++ and I've run into an issue calculating a checksum for a serial communication protocol. I'm very new to serial programming and this is a little above my knowledge at the moment.
Per their documentation:

Communication format:

Head   Address    CID    Data length   Data   Check      Tail
0x7E   0x01~0x0e  0x01   -             -      checksum   0x0D
The head never changes, the address can change, and the CID is a command.
I don't understand how the data length is supposed to be calculated.
I'm also not clear on how they calculate the checksum. Are they taking the first 5 bytes, calculating the checksum over those, and then adding the tail? Or only the "command", which to me is the CID byte? But that doesn't make sense.
The function they use to calculate the checksum:
byte check(byte* buf, byte len)
{
    byte i, chk = 0;
    int sum = 0;
    for (i = 0; i < len; i++)
    {
        chk ^= buf[i];
        sum += buf[i];
    }
    return ((chk ^ sum) & 0xFF);
}
which I wrote in Qt as:
unsigned char MainWindow::Checksum_Check(char* buf, char len)
{
    unsigned char i, chk = 0;
    int sum = 0;
    for (i = 0; i < len; i++)
    {
        chk ^= buf[i];
        sum += buf[i];
    }
    return ((chk ^ sum) & 0xFF);
}
So, sending a hexadecimal command using QByteArray:
QByteArray Chunk("\x7E\x01\x01\x00", 4); // explicit length, so the embedded 0x00 isn't dropped
char *TestStr = Chunk.data();
unsigned char Checksum = Checksum_Check(TestStr, Chunk.length()); // tried sizeof(char), even tried sizeof(Chunk.size())
const char cart[] = {'\x7E', '\x01', '\x01', '\x00', static_cast<char>(Checksum), '\x0D'};
QByteArray ba4(QByteArray::fromRawData(cart, 6));
SerialPort->write(ba4.toHex()); // Tried write(ba4); as well.
SerialPort->waitForBytesWritten(1000);
What I get in qDebug() is
Checksum "FE"
Command "7E010100FE0D"
Which looks correct, but the device ignores the request.
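At least the arithmetic seems to check out if I assume the check covers the four bytes before it; a hand trace of their function over 7E 01 01 00 gives

chk = 0x7E ^ 0x01 ^ 0x01 ^ 0x00           = 0x7E
sum = 0x7E + 0x01 + 0x01 + 0x00           = 0x80
(chk ^ sum) & 0xFF = (0x7E ^ 0x80) & 0xFF = 0xFE

which matches the FE I compute.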
My question is: is the checksum calculated correctly by me, or am I missing something crucial?
Any advice or help would be most welcome, as I am stuck.
I've checked it with one of their example commands:
7E 01 F2 02 FF FF FE 0D
which, if I do this:
QByteArray Chunk("\x7E\x01\xF2\x02\xFF\xFF");
char *TestStr = Chunk.data();
unsigned char Checksum = Checksum_Check(TestStr, Chunk.length());
const char cart[] = {'\x7E', '\x01', '\xF2', '\x02', '\xFF', '\xFF', static_cast<char>(Checksum), '\x0D'};
QByteArray ba4(QByteArray::fromRawData(cart, 8));
QString hexvalue;
hexvalue = QString("%1").arg(Checksum, 0, 16, QLatin1Char('0'));
qDebug() << "Checksum " << hexvalue.toUpper();
qDebug() << "Command " << ba4.toHex().toUpper();
SerialPort->write(ba4.toHex());
gives me
Checksum "FE"
Command "7E01F202FFFFFE0D"
which looks correct. But the device still doesn't respond, so I just want to verify that the checksum is in fact generated correctly.
Also in the documentation is an example
PC software Command:
buf[0] = 0x7E; //head
buf[1] = 0x01; //addr
buf[2] = 0x01; //CID
buf[3] = 0x00; //data length
buf[4] = CHK; //Check code
buf[5] = 0x0D; //tail
And here is another question: how can the checksum buf[4] be generated while the byte array is still being constructed? It doesn't make sense to me at this point, because I don't know/understand which bytes they use for the checksum.
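My best guess (unverified) from their example is that they fill the first four bytes, run the check over buf[0]..buf[3], and only then store buf[4], something like:

byte buf[6];
buf[0] = 0x7E;          // head
buf[1] = 0x01;          // addr
buf[2] = 0x01;          // CID
buf[3] = 0x00;          // data length
buf[4] = check(buf, 4); // check over the four bytes already filled in
buf[5] = 0x0D;          // tail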
Anyone? Thanks.

Related

Issues with calculating CRC16/MCRF4XX

I would like some help, as I seem to be unable to calculate the CRC for a single byte, let alone an array of bytes.
Below is the test code that I have been trying to get to work. Please note that I used the code from this post; the example with the array of chars works, but I need to adapt this to work for an array of uint8_t.
Going through the fundamentals of CRC, this seems like it should work, but I may be mistaken.
I have been trying to validate the function by checking the outputted remainder (crc) using this site.
If someone could kindly identify where the issue is coming from that leads to the wrong CRC value being calculated, and educate me as to why this is happening, that would be highly appreciated!
#include <cstdint>
#include <iostream>

// Below 2 functions taken from
// https://stackoverflow.com/questions/59486262/issues-calculating-crc16-mcrf4xx-for-more-than-1-byte
uint16_t Utils_CRC16_MCRF4XX(uint16_t Crc, uint8_t Byte)
{
    Crc ^= Byte;
    for (uint8_t i = 0; i < 8; i++)
    {
        Crc = (Crc & 0x0001) != 0 ? (Crc >> 1) ^ 0x8408
                                  : Crc >> 1;
    }
    return Crc;
}

uint16_t crc(uint8_t* pData, uint32_t Len)
{
    uint16_t Crc = 0xffff;
    for (uint32_t i = 0; i < Len; i++)
    {
        Crc = Utils_CRC16_MCRF4XX(Crc, pData[i]);
    }
    return Crc;
}

int main()
{
    int8_t val1 = 30;
    int8_t val2 = 30;

    // Arbitrary Test Message
    uint8_t message[] =
    {
        (uint8_t)54
    };

    /*
    // Sample Test Message (Actually what I intend to transmit)
    uint8_t message[] =
    {
        (uint8_t)250,
        (uint8_t)7,
        (uint8_t)21,
        (uint8_t)2,
        (uint8_t)2,
        (uint8_t)val1,
        (uint8_t)val2
    };
    */

    //uint8_t test[] = "123456789";
    uint16_t remainder = crc(message, sizeof(message));

    // Expected DEC: 28561
    // Actual DEC: 23346
    std::cout << "CRC: " << remainder << std::endl;
    return 0;
}
If you do this:
uint8_t test[] = "123456789";
uint16_t remainder = crc(test, sizeof(test)-1);
then you get the expected result, 28561.
You need to subtract 1 from sizeof(test) to exclude the terminating 0.
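For reference, 28561 is 0x6F91, the published check value for CRC-16/MCRF4XX over the string "123456789". An equivalent way to avoid the off-by-one (a sketch; needs <cstring>):

uint8_t test[] = "123456789";
// strlen counts only the nine digits, never the terminating 0
uint16_t remainder = crc(test, strlen(reinterpret_cast<const char*>(test))); // 28561 (0x6F91)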

Failing to fill a buffer for a *.wav file using two different frequencies

This was solved by changing the buffer from int16_t to int8_t, since I was trying to write 8-bit audio.
I'm trying to fill a buffer for a mono wave file with two different frequencies but failing at it. I'm using CLion on Ubuntu 18.04.
I know the buffer size is equal to duration * sample_rate, so I'm creating an int16_t vector with that size. I tried filling it with one note first:
for (int i = 0; i < frame_total; i++)
    audio[i] = static_cast<int16_t>(128 + 127 * sin(i));
which generated a nice long beep. Then I changed it to:
for (int i = 0; i < frame_total; i++)
    audio[i] = static_cast<int16_t>(128 + 127 * sin(i*2));
which generated a higher beep. But when I try the following:
for (int i = 0; i < frame_total/2; i++)
    audio[i] = static_cast<int16_t>(128 + 127 * sin(i*2));
for (int i = frame_total/2; i < frame_total; i++)
    audio[i] = static_cast<int16_t>(128 + 127 * sin(i));
I expect it to write the higher beep in the first half of the audio and fill the other half with the "normal" beep, but the *.wav file just plays the first note the entire time.
#define FORMAT_AUDIO 1
#define FORMAT_SIZE 16

struct wave_header {
    // Header
    char riff[4];
    int32_t file_size;
    char wave[4];
    // Format
    char fmt[4];
    int32_t format_size;
    int16_t format_audio;
    int16_t num_channels;
    int32_t sample_rate;
    int32_t byte_rate;
    int16_t block_align;
    int16_t bits_per_sample;
    // Data
    char data[4];
    int32_t data_size;
};

void write_header(ofstream &music_file, int16_t bits, int32_t samples, int32_t duration) {
    wave_header wav_header{};
    int16_t channels_quantity = 1;
    int32_t total_data = duration * samples * channels_quantity * bits / 8;
    int32_t file_data = 4 + 8 + FORMAT_SIZE + 8 + total_data;

    wav_header.riff[0] = 'R';
    wav_header.riff[1] = 'I';
    wav_header.riff[2] = 'F';
    wav_header.riff[3] = 'F';
    wav_header.file_size = file_data;
    wav_header.wave[0] = 'W';
    wav_header.wave[1] = 'A';
    wav_header.wave[2] = 'V';
    wav_header.wave[3] = 'E';
    wav_header.fmt[0] = 'f';
    wav_header.fmt[1] = 'm';
    wav_header.fmt[2] = 't';
    wav_header.fmt[3] = ' ';
    wav_header.format_size = FORMAT_SIZE;
    wav_header.format_audio = FORMAT_AUDIO;
    wav_header.num_channels = channels_quantity;
    wav_header.sample_rate = samples;
    wav_header.byte_rate = samples * channels_quantity * bits / 8;
    wav_header.block_align = static_cast<int16_t>(channels_quantity * bits / 8);
    wav_header.bits_per_sample = bits;
    wav_header.data[0] = 'd';
    wav_header.data[1] = 'a';
    wav_header.data[2] = 't';
    wav_header.data[3] = 'a';
    wav_header.data_size = total_data;

    music_file.write((char*)&wav_header, sizeof(wave_header));
}

int main(int argc, char const *argv[]) {
    int16_t bits = 8;
    int32_t samples = 44100;
    int32_t duration = 4;
    ofstream music_file("music.wav", ios::out | ios::binary);
    int32_t frame_total = samples * duration;
    auto* audio = new int16_t[frame_total];

    for (int i = 0; i < frame_total/2; i++)
        audio[i] = static_cast<int16_t>(128 + 127 * sin(i*2));
    for (int i = frame_total/2; i < frame_total; i++)
        audio[i] = static_cast<int16_t>(128 + 127 * sin(i));

    write_header(music_file, bits, samples, duration);
    music_file.write(reinterpret_cast<char*>(audio), sizeof(char)*frame_total);
    return 0;
}
There are two major issues with your code.
The first one is that you may be writing an invalid header, depending on your compiler settings and environment.
The reason is that the wave_header struct is not packed in memory and may contain padding between members. Therefore, when you do:

music_file.write((char*)&wav_header, sizeof(wave_header));

it may write something that isn't a valid WAV header. Even if you are lucky enough to get exactly what you wanted, it is a good idea to fix it, because it may change at any moment, and it surely isn't portable.
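A cheap guard (my sketch, not part of the original answer) is to assert the expected layout at compile time; the canonical PCM header above should come out to exactly 44 bytes:

// If this fails, padding crept in: pack the struct (e.g. #pragma pack(push, 1)
// on GCC/Clang/MSVC) or write each header field out individually.
static_assert(sizeof(wave_header) == 44, "wave_header contains padding");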
The second issue is that the call that writes the actual wave:

music_file.write(reinterpret_cast<char*>(audio), sizeof(char)*frame_total);

is writing exactly half the amount of data you are expecting. The actual size of the data pointed to by audio is sizeof(int16_t) * frame_total.
This explains why you are only hearing the first part of the wave you wrote.
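Assuming the buffer stays int16_t, the corrected call would be:

music_file.write(reinterpret_cast<char*>(audio), sizeof(int16_t) * frame_total);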
This was solved by changing the buffer (audio) from int16_t to int8_t, since I was trying to write 8-bit audio.

Bitwise operator to calculate checksum

I'm trying to come up with a C/C++ function to calculate the checksum of a given array of hex values.
char *hex = "3133455D332015550F23315D";
For example, the above buffer has 12 bytes, and the last byte is the checksum.
What needs to be done is: convert the first 11 individual bytes to decimal and then take their sum,
i.e., 31 = 49,
33 = 51, .....
so 49 + 51 + .....................
Then convert this decimal value to hex, take the LSB of that hex value, and convert that to binary.
Now take the 2's complement of this binary value and convert that to hex. At this step, the hex value should be equal to the 12th byte.
But the above buffer is just an example, so it may not be correct.
So there are multiple steps involved in this.
I'm looking for an easy way to do this using bitwise operators.
I did something like this, but it seems to take the first 2 bytes and doesn't give me the right answer.
int checksum(char * buffer, int size) {
    int value = 0;
    unsigned short tempChecksum = 0;
    int checkSum = 0;
    for (int index = 0; index < size - 1; index++) {
        value = (buffer[index] << 8) | (buffer[index]);
        tempChecksum += (unsigned short) (value & 0xFFFF);
    }
    checkSum = (~(tempChecksum & 0xFFFF) + 1) & 0xFFFF;
}
I couldn't get this logic to work, and I don't have enough embedded programming behind me to understand the bitwise operators. Any help is welcome.
ANSWER
I got this working with the changes below.

for (int index = 0; index < size - 1; index++) {
    value = buffer[index];
    tempChecksum += (unsigned short) (value & 0xFFFF);
}
checkSum = (~(tempChecksum & 0xFF) + 1) & 0xFF;
Using addition to obtain a checksum is at least unusual. Common checksums use bitwise XOR or a full CRC. But assuming it is really what you need, it can be done easily with unsigned char operations:
#include <stdio.h>

char checksum(const char *hex, int n) {
    unsigned char ck = 0;
    for (int i = 0; i < n; i += 1) {
        unsigned val;
        int cr = sscanf(hex + 2 * i, "%2x", &val); // convert 2 hex chars to a byte value
        if (cr == 1) ck += val;
    }
    return ck;
}

int main() {
    char hex[] = "3133455D332015550F23315D";
    char ck = checksum(hex, 11);
    printf("%2x", (unsigned) (unsigned char) ck);
    return 0;
}
As the operations are made on an unsigned char, everything exceeding a byte value is properly discarded, and you obtain your value (0x26 in your example).
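If the device really does want the two's complement of that sum as the final byte, as the question describes, that is one extra negation (a sketch):

unsigned char check = (unsigned char) -ck; // same as (~ck + 1) & 0xFF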

Base64 image file encoding with C++

I am writing some simple code to encode files to base64. I have a short C++ program that reads a file into a vector and converts it to unsigned char*. I do this so I can properly use the encoding function I got.
The problem: it works with text files (of different sizes), but it won't work with image files, and I can't figure out why. What gives?
For a simple text.txt containing the text abcd, the output of both my code and bash's $( base64 text.txt ) is the same.
On the other hand, when I input an image, the output is something like iVBORwOKGgoAAAAAAA......AAA==, or it sometimes ends with corrupted size vs prev_size Aborted (core dumped); the first few bytes are correct.
The code:

#include <cstdlib>   // malloc, free
#include <cstring>   // strncpy
#include <fstream>
#include <iostream>
#include <vector>
using namespace std;
typedef unsigned char BYTE;  // Windows-style typedefs the encoder below uses
typedef unsigned long DWORD;

static std::vector<char> readBytes(char const* filename)
{
    std::ifstream ifs(filename, std::ios::binary | std::ios::ate);
    std::ifstream::pos_type pos = ifs.tellg();
    std::vector<char> result(pos);
    ifs.seekg(0, std::ios::beg);
    ifs.read(&result[0], pos);
    return result;
}
static char Base64Digits[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

int ToBase64Simple(const BYTE* pSrc, int nLenSrc, char* pDst, int nLenDst)
{
    int nLenOut = 0;
    while (nLenSrc > 0) {
        if (nLenOut + 4 > nLenDst) {
            cout << "error\n";
            return 0; // error
        }
        // read three source bytes (24 bits)
        BYTE s1 = pSrc[0]; // (but avoid reading past the end)
        BYTE s2 = 0; if (nLenSrc > 1) s2 = pSrc[1]; //------ corrected, thanks to jprichey
        BYTE s3 = 0; if (nLenSrc > 2) s3 = pSrc[2];
        DWORD n;
        n = s1;   // xxx1
        n <<= 8;  // xx1x
        n |= s2;  // xx12
        n <<= 8;  // x12x
        n |= s3;  // x123
        //-------------- get four 6-bit values for lookups
        BYTE m4 = n & 0x3f; n >>= 6;
        BYTE m3 = n & 0x3f; n >>= 6;
        BYTE m2 = n & 0x3f; n >>= 6;
        BYTE m1 = n & 0x3f;
        //------------------ lookup the right digits for output
        BYTE b1 = Base64Digits[m1];
        BYTE b2 = Base64Digits[m2];
        BYTE b3 = Base64Digits[m3];
        BYTE b4 = Base64Digits[m4];
        //--------- end of input handling
        *pDst++ = b1;
        *pDst++ = b2;
        if (nLenSrc >= 3) { // 24 src bits left to encode, output xxxx
            *pDst++ = b3;
            *pDst++ = b4;
        }
        if (nLenSrc == 2) { // 16 src bits left to encode, output xxx=
            *pDst++ = b3;
            *pDst++ = '=';
        }
        if (nLenSrc == 1) { // 8 src bits left to encode, output xx==
            *pDst++ = '=';
            *pDst++ = '=';
        }
        pSrc += 3;
        nLenSrc -= 3;
        nLenOut += 4;
    }
    // Could optionally append a NULL byte like so:
    *pDst++ = 0; nLenOut++;
    return nLenOut;
}
int main(int argc, char* argv[])
{
    std::vector<char> mymsg;
    mymsg = readBytes(argv[1]);
    char* arr = &mymsg[0];
    int len = mymsg.size();
    int lendst = ((len + 2) / 3) * 4;
    unsigned char* uarr = (unsigned char *) malloc(len * sizeof(unsigned char));
    char* dst = (char *) malloc(lendst * sizeof(char));
    mymsg.clear(); //free()
    // convert to unsigned char
    strncpy((char*)uarr, arr, len);
    int lenOut = ToBase64Simple(uarr, len, dst, lendst);
    free(uarr);
    int cont = 0;
    while (cont < lenOut) //(dst[cont] != 0)
        cout << dst[cont++];
    cout << "\n";
}
Any insight is welcome.
I see two problems.
First, you are clearing your mymsg vector before you're done using it. This leaves the arr pointer dangling (pointing at memory that is no longer allocated). When you access arr to get the data out, you end up with Undefined Behavior.
Then you use strncpy to copy (potentially) binary data. This copy will stop when it reaches the first nul (0) byte within the file, so not all of your data will be copied. You should use memcpy instead.
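Putting both fixes together, a minimal sketch of the copy step (keep the vector alive until the copy is done; memcpy from <cstring> preserves embedded 0 bytes):

unsigned char* uarr = (unsigned char*) malloc(len);
memcpy(uarr, arr, len); // copies all len bytes; strncpy would stop at the first 0
mymsg.clear();          // safe now: arr is no longer needed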

CRC24Q implementation

I am trying to implement the algorithm of a CRC check, which basically creates a value based on an input message.
So, consider I have the hex message 3F214365876616AB15387D5D59, and I want to obtain the CRC24Q value of the message.
The algorithm that I found to do this is the following:
typedef unsigned long crc24;

crc24 crc_check(unsigned char *input) {
    unsigned char *octets;
    crc24 crc = 0xb704ce; // CRC24_INIT
    int i;
    int len = strlen(input);
    octets = input;
    while (len--) {
        crc ^= ((*octets++) << 16);
        for (i = 0; i < 8; i++) {
            crc <<= 1;
            if (crc & 0x8000000)
                crc ^= CRC24_POLY;
        }
    }
    return crc & 0xFFFFFF;
}
where *input = 3F214365876616AB15387D5D59.
The problem is that ((*octets++) << 16) shifts the ASCII value of each hex character by 16 bits, not the byte it encodes: '3' is stored as ASCII 0x33, while the first byte I actually want is 0x3F, built from the pair '3' and 'F'.
So I made a function to convert the hex characters to raw bytes.
I know the implementation looks weird, and I wouldn't be surprised if it were wrong.
This is the convert function:
char* convert(unsigned char* message) {
    unsigned char* input;
    input = message;
    int p;
    char *xxxx[20];
    xxxx[0] = "";
    for (p = 0; p < length(message) - 1; p = p + 2) {
        char* pp[20];
        pp[0] = input[0];
        char *c[20];
        *input++;
        c[0] = input[0];
        *input++;
        strcat(pp, c);
        char cc;
        char tt[2];
        cc = (char) strtol(pp, &pp, 16);
        tt[0] = cc;
        strcat(xxxx, tt);
    }
    return xxxx;
}
So:

unsigned char *msg_hex = "3F214365876616AB15387D5D59";
crc_sum = crc_check(convert(msg_hex));
printf("CRC-sum: %x\n", crc_sum);
Thank you very much for any suggestions.
Shouldn't the if (crc & 0x8000000) be if (crc & 0x1000000)? Otherwise you're testing the 28th bit, not the 25th, for 24-bit overflow.
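For the hex-to-bytes conversion itself, a minimal sketch (my own, assuming an even number of hex digits and no separators). Note that crc_check would then need to take the byte count as a parameter, since a converted byte can legitimately be 0 and would break strlen:

#include <stdio.h>
#include <string.h>

/* Convert "3F21..." into raw bytes; returns the number of bytes written. */
size_t hex_to_bytes(const char *hex, unsigned char *out)
{
    size_t n = strlen(hex) / 2;
    for (size_t i = 0; i < n; i++) {
        unsigned v;
        sscanf(hex + 2 * i, "%2x", &v);
        out[i] = (unsigned char) v;
    }
    return n;
}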