CRC24Q implementation - crc

I am trying to implement a CRC check, which basically creates a value based on an input message.
So, suppose I have the hex message 3F214365876616AB15387D5D59 and I want to obtain the CRC24Q value of that message.
The algorithm that I found to do this is the following:
typedef unsigned long crc24;

crc24 crc_check(unsigned char *input) {
    unsigned char *octets;
    crc24 crc = 0xb704ce; // CRC24_INIT;
    int i;
    int len = strlen(input);
    octets = input;
    while (len--) {
        crc ^= ((*octets++) << 16);
        for (i = 0; i < 8; i++) {
            crc <<= 1;
            if (crc & 0x1000000)
                crc ^= CRC24_POLY;
        }
    }
    return crc & 0xFFFFFF;
}
where *input=3F214365876616AB15387D5D59.
The problem is that ((*octets++) << 16) shifts the ASCII value of each hex character by 16 bits, not the byte value it represents.
So I wrote a function to convert the hex string to raw characters.
I know the implementation looks weird, and I wouldn't be surprised if it were wrong.
This is the convert function:
char* convert(unsigned char* message) {
    unsigned char* input;
    input = message;
    int p;
    char *xxxx[20];
    xxxx[0] = "";
    for (p = 0; p < length(message) - 1; p = p + 2) {
        char* pp[20];
        pp[0] = input[0];
        char *c[20];
        *input++;
        c[0] = input[0];
        *input++;
        strcat(pp, c);
        char cc;
        char tt[2];
        cc = (char) strtol(pp, &pp, 16);
        tt[0] = cc;
        strcat(xxxx, tt);
    }
    return xxxx;
}
SO:
unsigned char *msg_hex = "3F214365876616AB15387D5D59";
crc_sum = crc_check(convert(msg_hex));
printf("CRC-sum: %x\n", crc_sum);
Thank you very much for any suggestions.

Shouldn't the if (crc & 0x8000000) be if (crc & 0x1000000)? Otherwise you're testing the 28th bit, not the 25th, for 24-bit overflow.
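For what it's worth, here is a minimal sketch of one way to put the pieces together: parse the hex string into raw bytes first, then pass an explicit byte count to the CRC routine (strlen cannot be used on the parsed buffer, since a decoded byte may be 0x00). The CRC24_POLY value 0x1864CFB and the hex_to_bytes helper are my own assumptions; neither is shown in the question.

#include <stdio.h>

typedef unsigned long crc24;

#define CRC24_INIT 0xB704CEUL
#define CRC24_POLY 0x1864CFBUL   /* assumed generator polynomial; not given in the question */

/* Same bit-at-a-time loop as above, but taking an explicit length. */
crc24 crc_check(const unsigned char *octets, size_t len)
{
    crc24 crc = CRC24_INIT;
    while (len--) {
        crc ^= (crc24)(*octets++) << 16;
        for (int i = 0; i < 8; i++) {
            crc <<= 1;
            if (crc & 0x1000000)
                crc ^= CRC24_POLY;
        }
    }
    return crc & 0xFFFFFF;
}

/* Parse a hex string such as "3F21..." into raw bytes; returns the byte count. */
size_t hex_to_bytes(const char *hex, unsigned char *out, size_t max)
{
    size_t n = 0;
    unsigned byte;
    while (n < max && sscanf(hex + 2 * n, "%2x", &byte) == 1) {
        out[n++] = (unsigned char)byte;
    }
    return n;
}

int main(void)
{
    const char *msg_hex = "3F214365876616AB15387D5D59";
    unsigned char bytes[32];
    size_t n = hex_to_bytes(msg_hex, bytes, sizeof bytes);
    printf("CRC-sum: %lx\n", (unsigned long)crc_check(bytes, n));
    return 0;
}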

Related

How to modify crc-32 to crc-32/mpeg-2

I am trying to code a function to match CRC 32 output from a device to the actual CRC-32 sum that I calculate. Following is my code:
#include <iostream>
#include <string.h>
#define CRC32_POLYNOMIAL 0xEDB88320
using namespace std;

unsigned int crc32b(unsigned char *message, size_t l)
{
    int i, j;
    unsigned int byte, crc, mask;
    i = 0;
    crc = 0xFFFFFFFF;
    while (i < l) {
        byte = message[i];             // Get next byte.
        crc = crc ^ byte;
        for (j = 7; j >= 0; j--) {     // Do eight times.
            mask = -(crc & 1);
            crc = (crc >> 1) ^ (0xEDB88320 & mask);
        }
        i = i + 1;
    }
    return ~crc;
}

int main()
{
    unsigned char Buff[] = {0x91,0xFF,0xFC,0xEA,0xFF,0xFF,0x70,0xFF,0xFD,0x87,0x00,0xFF,0xF9,0x1B,0xFF,0xF3,0x4E,0x00,0xFB,0x00,0x00,0x02,0x01,0xFB};
    unsigned long CRC = crc32b((unsigned char *)Buff, 24);
    cout << hex << CRC << endl;
    getchar();
    return 0;
}
This gives me the 32-bit CRC output of the following payload:
91FFFCEAFFFF70FFFD8700FFF91BFFF34E00FB00000201FB
as 1980AC80. However, the device is giving the checksum as 8059143D.
Upon further inspection using online CRC calculators, I found that the device is sending out the CRC-32/MPEG-2 checksum value. (Can be verified here.) I have browsed multiple sites but did not find any straightforward implementation of CRC-32/MPEG-2 that I can integrate into my code. Can anyone help?
As noted on the crcalc web page, CRC-32/MPEG-2 uses a left-shifting (not reflected) CRC along with the CRC polynomial 0x104C11DB7 and an initial CRC value of 0xFFFFFFFF, and is not post-complemented:
unsigned int crc32b(unsigned char *message, size_t l)
{
    size_t i, j;
    unsigned int crc, msb;
    crc = 0xFFFFFFFF;
    for (i = 0; i < l; i++) {
        // xor next byte to upper bits of crc
        crc ^= (((unsigned int)message[i]) << 24);
        for (j = 0; j < 8; j++) {      // Do eight times.
            msb = crc >> 31;
            crc <<= 1;
            crc ^= (0 - msb) & 0x04C11DB7;
        }
    }
    return crc;                        // don't complement crc on output
}

Convert unsigned char array of characters to int C++

How can I convert an unsigned char array that contains letters into an integer? I have tried this so far, but it only converts up to four bytes. I also need a way to convert the integer back into the unsigned char array.
int buffToInteger(char *buffer)
{
    int a = static_cast<int>(static_cast<unsigned char>(buffer[0]) << 24 |
                             static_cast<unsigned char>(buffer[1]) << 16 |
                             static_cast<unsigned char>(buffer[2]) << 8 |
                             static_cast<unsigned char>(buffer[3]));
    return a;
}
It looks like you're trying to use a for loop, i.e. repeating a task over and over again, for an indeterminate number of steps.
unsigned int buffToInteger(char *buffer, unsigned int size)
{
    // assert(size <= sizeof(int));
    unsigned int ret = 0;
    int shift = 0;
    for (int i = size - 1; i >= 0; i--) {
        // cast through unsigned char so bytes with the high bit set don't sign-extend
        ret |= static_cast<unsigned int>(static_cast<unsigned char>(buffer[i])) << shift;
        shift += 8;
    }
    return ret;
}
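The question also asks for the reverse direction; here is a minimal sketch that mirrors the byte order of the loop above (the name integerToBuff is just for illustration):

void integerToBuff(unsigned int value, char *buffer, unsigned int size)
{
    // Writes the low `size` bytes of `value`, most significant byte first,
    // so buffToInteger(buffer, size) gives back the same value.
    for (int i = size - 1; i >= 0; i--) {
        buffer[i] = static_cast<char>(value & 0xFF);
        value >>= 8;
    }
}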
What I think you are going for is called a hash -- converting an object to a unique integer. The problem is a hash IS NOT REVERSIBLE. This hash will produce different results for hash("WXYZABCD", 8) and hash("ABCD", 4). The answer by @Nicholas Pipitone DOES NOT produce different outputs for these different inputs.
Once you compute this hash, there is no way to get the original string back. If you want to keep knowledge of the original string, you MUST keep the original string as a variable.
int hash(char *buffer, size_t size) {
    int res = 0;
    for (size_t i = 0; i < size; ++i) {
        res += buffer[i];
        res *= 31;
    }
    return res;
}
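For instance, the two calls mentioned above really do come out different under this scheme (a throwaway driver, not part of the original answer):

#include <cstdio>

int main()
{
    char a[] = "WXYZABCD";
    char b[] = "ABCD";
    // As noted above, these two inputs hash to different values.
    std::printf("%d %d\n", hash(a, 8), hash(b, 4));
    return 0;
}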
Here's how to convert the first sizeof(int) bytes of the char array to an int:
int val = *(unsigned int *)buffer;
and to convert it back:
*(unsigned int *)buffer = val;
Note that your buffer must be at least the length of your int type size. You should check for this.
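If the alignment of the buffer is not guaranteed, the same idea can be expressed with memcpy; this variant is mine, not part of the original answer, and keeps the machine's native byte order just like the pointer cast:

#include <cstdio>
#include <cstring>

int main()
{
    char buffer[sizeof(unsigned int)] = {0x12, 0x34, 0x56, 0x78};
    unsigned int val;

    std::memcpy(&val, buffer, sizeof val);   // char array -> int, native byte order
    std::printf("%x\n", val);

    std::memcpy(buffer, &val, sizeof val);   // int -> char array
    return 0;
}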

Bitwise operator to calculate checksum

I'm trying to come up with a C/C++ function to calculate the checksum of a given array of hex values.
char *hex = "3133455D332015550F23315D";
For example, the above buffer has 12 bytes and the last byte is the checksum.
What needs to be done is: convert the first 11 individual bytes to decimal and then take their sum,
i.e., 31 = 49,
33 = 51, .....
so 49 + 51 + .....................
Then convert this decimal value to hex, take the LSB of that hex value, and convert it to binary.
Now take the 2's complement of this binary value and convert it back to hex. At this step, the hex value should be equal to the 12th byte.
But the above buffer is just an example, so it may not be correct.
So there are multiple steps involved in this.
I'm looking for an easy way to do this using bitwise operators.
I did something like this, but it seems to take only the first 2 bytes and doesn't give me the right answer:
int checksum(char *buffer, int size) {
    int value = 0;
    unsigned short tempChecksum = 0;
    int checkSum = 0;
    for (int index = 0; index < size - 1; index++) {
        value = (buffer[index] << 8) | (buffer[index]);
        tempChecksum += (unsigned short) (value & 0xFFFF);
    }
    checkSum = (~(tempChecksum & 0xFFFF) + 1) & 0xFFFF;
    return checkSum;
}
I couldn't get this logic to work. I don't have enough embedded programming behind me to understand the bitwise operators. Any help is welcome.
ANSWER
I got this working with the changes below.
for (int index = 0; index < size - 1; index++) {
    value = buffer[index];
    tempChecksum += (unsigned short) (value & 0xFFFF);
}
checkSum = (~(tempChecksum & 0xFF) + 1) & 0xFF;
Using addition to obtain a checksum is unusual, to say the least; common checksums use bitwise XOR or a full CRC. But assuming it is really what you need, it can be done easily with unsigned char operations:
#include <stdio.h>

char checksum(const char *hex, int n) {
    unsigned char ck = 0;
    for (int i = 0; i < n; i += 1) {
        unsigned val;
        int cr = sscanf(hex + 2 * i, "%2x", &val);   // convert 2 hex chars to a byte value
        if (cr == 1) ck += val;
    }
    return ck;
}

int main() {
    char hex[] = "3133455D332015550F23315D";
    char ck = checksum(hex, 11);
    printf("%2x", (unsigned) (unsigned char) ck);
    return 0;
}
As the operations are made on an unsigned char, everything exceeding a byte value is properly discarded and you obtain your value (26 in your example).
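To follow the question's description all the way through (take the two's complement of the low byte of the sum and compare it with the 12th byte), a sketch like the following could be used; as the question itself warns, the example buffer may not actually contain a valid checksum, so the comparison is only illustrative.

#include <stdio.h>

int main(void)
{
    const char *hex = "3133455D332015550F23315D";
    unsigned char bytes[12];

    /* parse the 24 hex characters into 12 raw bytes */
    for (int i = 0; i < 12; i++) {
        unsigned v;
        sscanf(hex + 2 * i, "%2x", &v);
        bytes[i] = (unsigned char)v;
    }

    unsigned char sum = 0;
    for (int i = 0; i < 11; i++)
        sum += bytes[i];                                  /* wraps modulo 256 */

    unsigned char expected = (unsigned char)(~sum + 1);   /* two's complement of the low byte */
    printf("computed %02X, stored %02X, %s\n",
           expected, bytes[11], expected == bytes[11] ? "match" : "mismatch");
    return 0;
}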

Convert unsigned short to char

I know there are other posts like this, but I think mine is different. I have a group of numbers I wish to display; however, they are saved as unsigned short. As the data is given to me from a network buffer, all of it is in unsigned short format. So for a serial number starting with "ABC-", the two unsigned shorts will be holding 0x4142 and 0x432D (already in ASCII format). I need to convert those to type char to display them using printf and %s, but for the rest of my system they need to remain unsigned short. This is what I've tried so far, but the output is blank:
unsigned char * num[3];
num[0] = (unsigned char*)(SYSTEM_N >> 8);
num[1] = (unsigned char*)(SYSTEM_N & 0x00FF);
printf("System Number: %s \r\n", num);
Can anyone shed some light on this for me? Thanks!
There are several errors: 1) the array is too short, 2) it is defined as an array of pointers, and 3) the string is not terminated.
#include <stdio.h>

int main(void)
{
    unsigned short system_m = 0x4142;
    unsigned short system_n = 0x432D;
    unsigned char num[5];

    num[0] = system_m >> 8;
    num[1] = system_m & 0xFF;
    num[2] = system_n >> 8;
    num[3] = system_n & 0xFF;
    num[4] = '\0';

    printf("System Number: %s \r\n", num);
    return 0;
}
EDIT: alternatively, if you don't want to keep the string, just display the information with this:
#include <stdio.h>

int main(void)
{
    unsigned short system_m = 0x4142;
    unsigned short system_n = 0x432D;

    printf("System Number: %c%c%c%c \r\n",
           system_m >> 8, system_m & 0xFF, system_n >> 8, system_n & 0xFF);
    return 0;
}
Program output:
System Number: ABC-
You probably meant to write
unsigned char num[3];
as you have it, you declare an array holding three char* pointers.
Also don't forget to set the closing NUL character before printing:
num[2] = '\0';
A general solution might be something like this, assuming ushort_arr contains the unsigned shorts in an array and ushort_arr_size indicates its size.
char *str = malloc(ushort_arr_size * 2 + 1);
// check if str == NULL
int j = 0;
for (int i = 0; i < ushort_arr_size; i++) {
    str[j++] = ushort_arr[i] >> 8;
    str[j++] = ushort_arr[i];
}
str[j] = '\0';

printf("string is: %s\n", str);
free(str);
Though perhaps this might be more effective without the memory management, if you only want to print it once:
for (int i = 0; i < ushort_arr_size; i++) {
    putchar(ushort_arr[i] >> 8);
    putchar(ushort_arr[i] & 0xFF);
}
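A throwaway driver for the print-only loop, using the "ABC-" values from the question:

#include <stdio.h>

int main(void)
{
    unsigned short ushort_arr[] = {0x4142, 0x432D};
    int ushort_arr_size = sizeof ushort_arr / sizeof ushort_arr[0];

    for (int i = 0; i < ushort_arr_size; i++) {
        putchar(ushort_arr[i] >> 8);       /* high byte first: 'A', then 'B', ... */
        putchar(ushort_arr[i] & 0xFF);
    }
    putchar('\n');                         /* prints ABC- */
    return 0;
}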

C++ Bytes To Bits Conversion And Then Print

Code taken from: Bytes to Binary in C. Credit: BSchlinker.
The following code I modified to take more than 1 byte at a time. I got it half working and then got really confused by my loops. :( I've spent the last day and a half trying to figure it out... but my C++ skills are not really that good (still learning!)
#include <iostream>
using namespace std;

char show_binary(unsigned char u, unsigned char *result, int len);

int main()
{
    unsigned char p40[3] = {0x40, 0x00, 0x0a};
    unsigned char bits[8 * (sizeof(p40))];
    int c;
    c = sizeof(p40);

    show_binary(*p40, bits, 3);

    cout << "\n\n";
    cout << "BIN = ";
    do {
        for (int i = 0; i < 8; i++)
            printf("%d", bits[i + (8 * c)]);
        c++;
    } while (c < 3);
    cout << "\n";

    int a;
    cin >> a;
    return 0;
}
char show_binary(unsigned char u, unsigned char *result, int len)
{
    unsigned char mask = 1;
    unsigned char bits[8 * sizeof(result)];
    int a, b, c;
    a = 0;
    b = 0;
    c = len;

    do {
        for (int i = 0; i < 8; i++)
            bits[i + (8 * a)] = (u[&a] & (mask << i)) != 0;
        a++;
    } while (a < len);

    // Need to reverse it?
    do {
        for (int i = 8; i != -1; i--)
            result[i + (8 * c)] = bits[i + (8 * c)];
        b++;
        c--;
    } while (b < len);

    return *result;
}
After I spit out:
cout << "BIN = ";
do {
    for (int i = 0; i < 8; i++)
        printf("%d", bits[i + (8 * c)]);
    c++;
} while (c < 3);
I'd like to take bit[11] through bit[the end] and compute a BYTE every 8 bits, if that makes sense. But first the function should work. Any pro tips on how this should be done? And of course, rip my code apart; I like to learn.
Man, there is a lot going on in this code, so it's hard to know where to start. Suffice to say, you're trying a bit too hard. It sounds like you are trying to 1) pass in a byte array; 2) turn those bytes into a string representation of the binary; and 3) turn that string representation back into a value?
It just so happens I recently did something similar to this in C, which should still work using a C++ compiler.
#include <stdio.h>
#include <string.h>

/* A macro to get a substring */
#define substr(dest, src, dest_size, startPos, strLen) snprintf(dest, dest_size, "%.*s", strLen, src + startPos)

/* Pass in char* array of bytes, get binary representation as string in bitStr */
void str2bs(const char *bytes, size_t len, char *bitStr) {
    size_t i;
    char buffer[9] = "";
    for (i = 0; i < len; i++) {
        sprintf(buffer,
                "%c%c%c%c%c%c%c%c",
                (bytes[i] & 0x80) ? '1' : '0',
                (bytes[i] & 0x40) ? '1' : '0',
                (bytes[i] & 0x20) ? '1' : '0',
                (bytes[i] & 0x10) ? '1' : '0',
                (bytes[i] & 0x08) ? '1' : '0',
                (bytes[i] & 0x04) ? '1' : '0',
                (bytes[i] & 0x02) ? '1' : '0',
                (bytes[i] & 0x01) ? '1' : '0');
        strncat(bitStr, buffer, 8);
        buffer[0] = '\0';
    }
}
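For instance, feeding the p40 bytes from the question through str2bs (this driver is only an illustration, not part of the original answer):

#include <stdio.h>

int main(void)
{
    char p40[3] = {0x40, 0x00, 0x0a};
    char bitStr[8 * sizeof p40 + 1] = "";   /* 24 bits plus the terminating NUL */

    str2bs(p40, sizeof p40, bitStr);
    printf("BIN = %s\n", bitStr);           /* 01000000 00000000 00001010, without the spaces */
    return 0;
}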
To get the binary string back into a value, it can be done with bit shifting:
unsigned char bs2uc(char *bitStr) {
    unsigned char val = 0;
    int toShift = 0;
    int i;
    for (i = strlen(bitStr) - 1; i >= 0; i--) {
        if (bitStr[i] == '1') {
            val = (1 << toShift) | val;
        }
        toShift++;
    }
    return val;
}
Once you have a binary string, you can then take substrings of any arbitrary 8 bits (or fewer, I guess) and turn them back into bytes.
char *bitStr; /* Let's pretend this is populated with a valid string */
char byte[9] = "";
substr(byte, bitStr, 9, 4, 8);
/* This would create a substring of length 8 starting from index 4 of bitStr */
unsigned char b = bs2uc(byte);
I've actually created a whole suite of value -> binary string -> value functions if you'd like to take a look at them. GitHub - binstr