I'm building something which uses the DS18B20 temperature sensor.
First I am trying to understand the example CRC in the Maxim application note 27, "Understanding and Using Cyclic Redundancy Checks with Maxim iButton Products" (https://www.analog.com/en/technical-articles/understanding-and-using-cyclic-redundancy-checks-with-maxim-1wire-and-ibutton-products.html).
It doesn't look too hard to code the conversion but my problem is that I cannot find any calculator that gives me the correct answer of 0xA2.
On page 5, example 2, the complete ROM code is given in hex as A2 (CRC), 00 00 00 01 B8 1C (serial number), 02 (family code).
The generator polynomial is 100110001 (X^8 + X^5 + X^4 + 1).
The site https://crccalc.com/ has a CRC-8/MAXIM algorithm which uses the correct generator, but RefIn and RefOut are both true, whereas I cannot see anything in the application note about reflecting anything (although I have tried this).
The site https://tomeko.net/online_tools/crc8.php?lang=en claims to implement the CRC from the application note, but it gives the same answers as crccalc.com. Note that crccalc uses the same lookup table for the Maxim algorithm as the application note, so it is no surprise the two sites agree.
Finally I found a site, https://www.rndtool.info/CRC-step-by-step-calculator/, that lets me enter the polynomial and bit stream in binary and shows the 'hand' calculation of the CRC. It says nothing about input and output reflection, so I assume both are false. It gives different answers from the other two sites, probably because of the reflection settings, but still does not give 0xA2.
Has anyone correctly calculated the given value in the application note?
I don't want to start programming until I understand what is going on, and I cannot read data from a device if I cannot verify the CRC correctly. This is driving me mad at the moment. I've tried the data reflected and forwards, with the generator reflected and forwards, plus reversing the answer, but I never get 0xA2.
You didn't give a language in your tags. Here is an example in C:
#include <stdio.h>

/* Bit-by-bit CRC-8/MAXIM (1-Wire) calculation: bit-reflected polynomial 0x8c,
   initial value 0, no final XOR. */
unsigned crc8maximdow(unsigned char *data, size_t len) {
    unsigned crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];                 /* bring in the next byte, least significant bit first */
        for (unsigned k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ 0x8c : crc >> 1;
    }
    return crc;
}

int main(void) {
    /* ROM bytes in transmission order: family code first, CRC byte omitted */
    unsigned char data[] = {2, 0x1c, 0xb8, 1, 0, 0, 0};
    printf("0x%02x\n", crc8maximdow(data, sizeof(data)));
    return 0;
}
That prints 0xa2.
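If you prefer the table-driven form that the application note (and crccalc) uses, here is a sketch that builds the 256-entry table at run time instead of copying it from the note; the names here are mine, and it should print the same 0xa2:

#include <stdio.h>

static unsigned char table[256];

/* Build the lookup table for the bit-reflected polynomial 0x8c. */
static void make_table(void) {
    for (unsigned i = 0; i < 256; i++) {
        unsigned crc = i;
        for (unsigned k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ 0x8c : crc >> 1;
        table[i] = (unsigned char)crc;
    }
}

static unsigned crc8maximdow_table(unsigned char *data, size_t len) {
    unsigned crc = 0;
    for (size_t i = 0; i < len; i++)
        crc = table[crc ^ data[i]];
    return crc;
}

int main(void) {
    unsigned char data[] = {2, 0x1c, 0xb8, 1, 0, 0, 0};
    make_table();
    printf("0x%02x\n", crc8maximdow_table(data, sizeof(data)));  /* expect 0xa2 */
    return 0;
}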
Using Mark's answer above I worked out what I was doing wrong with the Maxim example. Tutorials talk about 'reflecting' the input data, so I literally took the 56-bit pattern and reversed it, which gave the wrong answer on https://crccalc.com/. Instead I reversed the byte order, so that 00 00 00 01 B8 1C 02 becomes 02 1C B8 01 00 00 00, which gives the correct CRC of 0xA2 on that site. Hope this helps anyone else having the same problem.
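One further check that may help once you start reading from the device: running the CRC over all eight ROM bytes, including the CRC byte itself, should give 0, which is a convenient way to validate a ROM read. A small self-contained sketch reusing the same routine as Mark's answer above:

#include <stdio.h>

/* Same bit-by-bit routine as in Mark's answer above. */
static unsigned crc8maximdow(unsigned char *data, size_t len) {
    unsigned crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (unsigned k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ 0x8c : crc >> 1;
    }
    return crc;
}

int main(void) {
    /* All eight ROM bytes in transmission order, CRC byte last. */
    unsigned char rom[] = {0x02, 0x1c, 0xb8, 0x01, 0x00, 0x00, 0x00, 0xa2};
    printf("0x%02x\n", crc8maximdow(rom, sizeof(rom)));  /* expect 0x00 */
    return 0;
}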
I’m working on the SPI communication between a microcontroller and the L9963E. The datasheet of the L9963E shows little information about the CRC calculation, but mentions:
a CRC of 6 bits,
a polynomial of X^6 + X^4 + X^3 + 1 = 0b1011001
a seed of 0b111000
The documentation also mentions in the SPI protocol details that the CRC is a value between 0x00 and 0x3F, and is "calculated on the [39-7] field of the frame", see Table 22.
I'm wondering: what is meant by "field [39-7]"? The total frame length is 40 bits, of which the CRC makes up 6 bits. I would expect a CRC calculation over the remaining 34 bits, but bits 39-7 would mean either 33 bits (bit 7 inclusive) or 32 bits (excluding bit 7).
Since I have access to an L9963E evaluation board, which includes pre-loaded firmware, I have hooked up a logic analyser. I have captured the following example frames sent to the L9963E from the eval board; I am assuming that these are valid, error-free frames.
0xC21C3ECEFD
0xC270080001
0xE330081064
0xC0F08C1047
0x827880800E
0xC270BFFFF9
0xC2641954BE
Could someone clear up the datasheet for me, and maybe advise me on how to implement this CRC calculation?
All of the special frames have a CRC residue of 0 (here "mod 59" denotes carry-less, GF(2) polynomial division by the polynomial 0x59, and E000000000 is the 6-bit seed 0b111000 aligned to the top of the 40-bit frame):
(0000000016 xor E000000000) mod 59 = 00
(C1FCFFFC6C xor E000000000) mod 59 = 00
(C1FCFFFC87 xor E000000000) mod 59 = 00
(C1FCFFFCDE xor E000000000) mod 59 = 00
(C1FCFFFD08 xor E000000000) mod 59 = 00
and, as noted by Mark Adler, only one of the messages in the question works:
(C270BFFFF9 xor E000000000) mod 59 = 00
Bit ranges in datasheets like this are always inclusive.
I suspect that this is just a typo, or the person who wrote it temporarily forgot that the bits are numbered from zero.
Looking at the other bit-field boundaries in Table 19 of the document you linked, it wouldn't make sense to exclude the bottom bit of the data field from the CRC, so I suspect the datasheet should say bits 39 to 6 inclusive.
There is a tool called pycrc that can generate C code to calculate a CRC with an arbitrary polynomial.
If a 40-bit message is in the low bits of uint64_t x;, then this:
x ^= (uint64_t)0x38 << 34;   // seed 0b111000 aligned with the top 6 bits of the 40-bit frame
for (int j = 0; j < 40; j++)
    x = x & 0x8000000000 ? (x << 1) ^ ((uint64_t)0x19 << 34) : x << 1;
x &= 0xffffffffff;           // keep the low 40 bits
gives x == 0 for all of the example messages in the document, but for only one of your examples in the question. You may not be extracting the data correctly.
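If you also need to generate the CRC for an outgoing frame, here is a minimal sketch under the same assumptions (payload in bits 39 to 6, polynomial 0x59, seed 0b111000); the function name is mine, not from the datasheet. It runs one division step per payload bit, so for the one frame from the question that checks out (0xC270BFFFF9) the value it computes should match the stored low 6 bits (0x39):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: compute the 6-bit CRC for a 40-bit frame.
   The payload sits in bits 39..6; the low 6 bits are cleared/ignored. */
static unsigned l9963e_crc6(uint64_t frame) {
    uint64_t x = frame & 0xffffffffc0;          /* clear the CRC field */
    x ^= (uint64_t)0x38 << 34;                  /* seed 0b111000 in the top 6 bits */
    for (int j = 0; j < 34; j++)                /* one division step per payload bit */
        x = x & 0x8000000000 ? (x << 1) ^ ((uint64_t)0x19 << 34) : x << 1;
    return (unsigned)(x >> 34) & 0x3f;          /* remainder lands in bits 39..34 */
}

int main(void) {
    uint64_t frame = 0xC270BFFFF9;              /* frame from the question that checks out */
    printf("stored   0x%02x\n", (unsigned)(frame & 0x3f));
    printf("computed 0x%02x\n", l9963e_crc6(frame));
    return 0;
}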
I have written a small application which at one point works with binary data. In unit tests, I compare this data with the expected data. When an error occurs, I want the test to display hexadecimal output such as:
Failure
Expected: string_to_hex(expected, 11)
Which is: "01 43 02 01 00 65 6E 74 FA 3E 17"
To be equal to: string_to_hex(writeBuffer, 11)
Which is: "01 43 02 01 00 00 00 00 98 37 DB"
In order to display that (and to compare binary data in the first place), I used the code from Stack Overflow, slightly modifying it for my needs:
std::string string_to_hex(const std::string& input, size_t len)
{
    static const char* const lut = "0123456789ABCDEF";
    std::string output;
    output.reserve(2 * len);
    for (size_t i = 0; i < len; ++i)
    {
        const unsigned char c = input[i];
        output.push_back(lut[c >> 4]);
        output.push_back(lut[c & 15]);
    }
    return output;
}
When checking for memory leaks with valgrind, I found a lot of errors such as this one:
Use of uninitialised value of size 8
at 0x11E75A: string_to_hex(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)
I'm not sure I understand it. First, everything seems initialized, including, unless I'm mistaken, output. Moreover, there is no mention of size 8 in the code; the value of len varies from test to test, while valgrind reports the same size 8 every time.
How should I fix this error?
So this is one of the cases where passing a pointer to char that points to a buffer filled with arbitrary binary data into the evil implicit constructor of std::string caused the string to be truncated at the first \0. The straightforward approach would be to pass a raw pointer and a length, but a better solution is to start using array_view, span, or similar utility classes that provide index validation, at least in debug builds, for both input and lut.
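For illustration, here is a minimal sketch of the raw-pointer variant (the function name and the space separators are mine, not from the original code). Taking const unsigned char* plus an explicit length keeps the bytes away from the implicit std::string constructor, so nothing is truncated at the first \0 and nothing past the end of a short string is read:

#include <cstddef>
#include <iostream>
#include <string>

// Hypothetical replacement: take a raw pointer and an explicit length so the
// bytes never go through the implicit std::string(const char*) constructor.
std::string bytes_to_hex(const unsigned char* data, std::size_t len)
{
    static const char* const lut = "0123456789ABCDEF";
    std::string output;
    output.reserve(3 * len);                 // two digits plus a separating space
    for (std::size_t i = 0; i < len; ++i)
    {
        if (i) output.push_back(' ');
        output.push_back(lut[data[i] >> 4]);
        output.push_back(lut[data[i] & 15]);
    }
    return output;
}

int main()
{
    const unsigned char buf[] = { 0x01, 0x43, 0x02, 0x01, 0x00, 0x65,
                                  0x6E, 0x74, 0xFA, 0x3E, 0x17 };
    std::cout << bytes_to_hex(buf, sizeof buf) << '\n';  // the embedded 0x00 no longer truncates
    return 0;
}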
I have a 320Mb binary file (data.dat), containing 32e7 lines of hex numbers:
1312cf60 d9 ff e0 ff 05 00 f0 ff 22 00 2f 00 fe ff 33 00 |........"./...3.|
1312cf70 00 00 00 00 f4 ff 1d 00 3d 00 6d 00 53 00 db ff |........=.m.S...|
1312cf80 b7 ff b0 ff 1e 00 0c 00 67 00 d1 ff be ff f8 ff |........g.......|
1312cf90 0b 00 6b 00 38 00 f3 ff cf ff cb ff e4 ff 4b 00 |..k.8.........K.|
....
Original numbers were:
(16,-144)
(-80,-64)
(-80,16)
(16,48)
(96,95)
(111,-32)
(64,-96)
(64,-16)
(31,-48)
(-96,-48)
(-32,79)
(16,48)
(-80,80)
(-48,128)
...
I have MATLAB code which can read them as real numbers and convert them to complex numbers:
nsamps = (256*1024);
for i = 1:305
    nstart = 1 + (i - 1) * nsamps;
    fid = fopen('data.dat');
    fseek(fid, 4 * nstart, 'bof');
    y = fread(fid, [2, nsamps], 'short');
    fclose(fid);
    x = complex(y(1,:), y(2,:));
end
I am using C++ and trying to get data as a vector<complex<float>>:
std::ifstream in("data.dat", std::ios_base::in | std::ios_base::binary);
fseek(infile1, 4*nstart, SEEK_SET);
vector<complex<float> > sx;
in.read(reinterpret_cast<char*>(&sx), sizeof(int));
and I am very confused about how to get the complex data using C++. Can anyone help me?
Theory
I'll try to explain some points using the issues in your code as examples.
Let's start from the end of the code. You try to read a number, which is stored as a four-byte single-precision floating point number, but you use sizeof(int) as a size argument. While on modern x86 platforms with modern compilers sizeof(int) tends to be equal to sizeof(float), it's not guaranteed. sizeof(int) is compiler dependent, so please use sizeof(float) instead.
In the MATLAB code you read 2*nsamps numbers, while in the C++ code only four bytes (one number) are read. Something like sizeof(float) * 2 * nsamps would be closer to the MATLAB code.
Next, std::complex is a complicated class, which (in general) may have any implementation-defined internal representation. But luckily, here we read that
For any object z of type complex<T>, reinterpret_cast<T(&)[2]>(z)[0] is the real part of z and reinterpret_cast<T(&)[2]>(z)[1] is the imaginary part of z.
For any pointer to an element of an array of complex<T> named p and any valid array index i, reinterpret_cast<T*>(p)[2*i] is the real part of the complex number p[i], and reinterpret_cast<T*>(p)[2*i + 1] is the imaginary part of the complex number p[i].
so we can just cast an std::complex to char type and read binary data there. But std::vector is a class template with its implementation-defined internal representation as well! That means we can't just reinterpret_cast<char*>(&sx) and write binary data to that pointer, because it points to the beginning of the vector object, which is unlikely to be the beginning of the vector's data. The modern C++ way to get the beginning of the data is to call sx.data(); the pre-C++11 way is to take the address of the first element, &sx[0]. Overwriting the object from its beginning will almost always result in a segfault.
OK, now we have the beginning of a data buffer which is able to receive the binary representation of complex numbers. But when you declared vector<complex<float> > sx;, it got zero size, and as you are not pushing or emplacing its elements, the vector will not "know" that it should resize. Segfault again. So just call resize:
sx.resize(number_of_complex_numbers_to_store);
or use an appropriate constructor:
vector<complex<float> > sx(number_of_complex_numbers_to_store);
before writing data to the vector. Note that these methods operate with the "high-level" concept of the number of stored elements, not the number of bytes to store.
Putting it all together, the last two lines of your code should look like:
vector<complex<float> > sx(nsamps);
in.read(reinterpret_cast<char*>(sx.data()), 2 * nsamps * sizeof(float));
Minimal example
If you continue having troubles, try a simpler sandbox code first.
For example, let's write six floats to a binary file:
std::ofstream ofs("file.dat", std::ios::binary | std::ios::out);
float foo[] = {1,2,3,4,5,6};
ofs.write(reinterpret_cast<char*>(foo), 6*sizeof(float));
ofs.close();
then read them to a vector of complex:
std::ifstream ifs("file.dat", std::ios::binary | std::ios::in);
std::vector<std::complex<float>> v(3);
ifs.read(reinterpret_cast<char*>(v.data()), 6*sizeof(float));
ifs.close();
and, finally, print them:
std::cout << v[0] << " " << v[1] << " " << v[2] << std::endl;
The program prints:
(1,2) (3,4) (5,6)
so this approach works fine.
Binary files
Here is the remark about binary files which I initially posted as a comment.
Binary files haven't got the concept of "lines". The number of "lines" in a binary file depends entirely on the size of the window you are viewing it in. Think of a binary file as a magnetic tape, where each discrete position of the head can read only one byte. Interpretation of those bytes is up to you.
If everything seems like it should work but you get weird numbers, check the displacement in your fseek call. Being off by a few bytes yields random-looking values instead of the floats you want.
Of course, you could also just read a vector (or an array) of floats, observing the above considerations, and then convert them to complex numbers in a loop; a sketch of that is below. It's also a good way to debug your fseek call and make sure you start reading from the right place.
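A minimal sketch of that float-then-convert approach (the file name and sample count here are placeholders, not taken from your data):

#include <complex>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <vector>

int main()
{
    const std::size_t nsamps = 4;                    // placeholder sample count
    std::ifstream in("data.dat", std::ios::binary);  // placeholder file name

    std::vector<float> raw(2 * nsamps);              // interleaved re, im, re, im, ...
    in.read(reinterpret_cast<char*>(raw.data()), raw.size() * sizeof(float));

    std::vector<std::complex<float>> sx;
    sx.reserve(nsamps);
    for (std::size_t i = 0; i < nsamps; ++i)         // pair up the floats in a loop
        sx.emplace_back(raw[2 * i], raw[2 * i + 1]);

    for (const auto& z : sx)
        std::cout << z << '\n';
    return 0;
}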
I've been scratching my head over calculating a checksum to communicate with Unitronics PLCs using binary commands. They offer the source code, but it's a Windows-only C# implementation, which is of little help to me other than for basic syntax.
Specification PDF (the checksum calculation is near the end)
C# driver source (checksum calculation in Utils.cs)
Intended Result
Below is the byte index, the message description, and a sample message that does work.
# 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 | 24 25 26 27 28 29 | 30 31 32
# sx--------------- id FE 01 00 00 00 cn 00 specific--------- lengt CHKSM | numbr ot FF addr- | CHKSM ex
# 2F 5F 4F 50 4C 43 00 FE 01 00 00 00 4D 00 00 00 00 00 01 00 06 00 F1 FC | 01 00 01 FF 01 00 | FE FE 5C
The specification calls for calculating the accumulated value of the 22-byte message header and, separately, the 6+ byte details, taking that sum modulo 65536, and then returning the two's complement of that value.
Attempt #1
My understanding is that the tilde (~) operator in Python is directly derived from C/C++. After a day of writing the Python that creates the message, I came up with this (stripped-down version):
#!/usr/bin/env python

def Checksum( s ):
    x = ( int( s, 16 ) ) % 0x10000
    x = ( ~x ) + 1
    return hex( x ).split( 'x' )[1].zfill( 4 )

Details = ''
Footer = ''
Header = ''
Message = ''

Details += '0x010001FF0100'
Header += '0x2F5F4F504C4300FE010000004D000000000001000600'
Header += Checksum( Header )
Footer += Checksum( Details )
Footer += '5C'

Message += Header.split( 'x' )[1].zfill( 4 )
Message += Details.split( 'x' )[1].zfill( 4 )
Message += Footer

print Message
Message: 2F5F4F504C4300FE010000004D000000000001000600600L010001FF010001005C
I see an L in there, which is a different result from yesterday's, which wasn't any closer. If you want a quick reference based on the rest of the message: Checksum(Header) should return F1FC and Checksum(Details) should return FEFE.
The value it returns is nowhere near the specification's example. I believe the issue is one of two things: either the Checksum method isn't calculating the sum of the hex string correctly, or the Python ~ operator is not equivalent to the C++ ~ operator.
Attempt #2
A friend has given me his C++ interpretation of what the calculation SHOULD be; I just can't get my head around this code, as my C++ knowledge is minimal.
short PlcBinarySpec::CalcHeaderChecksum( std::vector<byte> _header ) {
    short bytesum = 0;
    for ( std::vector<byte>::iterator it = _header.begin(); it != _header.end(); ++it ) {
        bytesum = bytesum + ( *it );
    }
    return ( ~( bytesum % 0x10000 ) ) + 1;
}
I'm not entirely sure what the correct code should be… but if the intention is for Checksum(Header) to return f705, and it's returning 08fb, here's the problem:
x = ( ~( x % 0x10000 ) ) + 1
The short version is that you want this:
x = (( ~( x % 0x10000 ) ) + 1) % 0x10000
The problem here isn't that ~ means something different. As the documentation says, ~x returns "the bits of x inverted", which is effectively the same thing it means in C (at least on 2s-complement platforms, which includes all Windows platforms).
You can run into a problem with the difference between C and Python types here (C integral types are fixed-size, and overflow; Python integral types are effectively infinite-sized, and grow as needed). But I don't think that's your problem here.
The problem is just a matter of how you convert the result to a string.
The result of calling Checksum(Header), up to the formatting, is -2299, or -0x08fb, in both versions.
In C, you can pretty much treat a signed integer as an unsigned integer of the same size (although you may have to ignore warnings to do so, in some cases). What exactly that does depends on your platform, but on a 2s-complement platform, signed short -0x08fb is the bit-for-bit equivalent of unsigned 0xf705. So, for example, if you do sprintf(buf, "%04hx", -0x08fb), it works just fine—and it gives you (on most platforms, including everything Windows) the unsigned equivalent, f705.
But in Python, there are no unsigned integers. The int -0x08fb has nothing to do with 0xf705. If you do "%04hx" % -0x08fb, you'll get -8fb, and there's no way to forcibly "cast it to unsigned" or anything like that.
Your code actually does hex(-0x08fb), which gives you -0x8fb, which you then split on the x, giving you 8fb, which you zfill to 08fb, which makes the problem a bit harder to notice (because that looks like a perfectly valid pair of hex bytes, instead of a minus sign and three hex digits), but it's the same problem.
Anyway, you have to explicitly decide what you mean by "unsigned equivalent", and write the code to do that. Since you're trying to match what C does on a 2s-complement platform, you can write that explicit conversion as % 0x10000. If you do "%04hx" % (-0x08fb % 0x10000), you'll get f705, just as you did in C. And likewise for your existing code.
It's quite simple. I checked your friend's algorithm by adding all the header bytes manually on a calculator, and it yields the correct result (0xfcf1).
Now, I don't actually know Python, but it looks to me like you are adding up half-byte values. You have made your header string like this:
Header = '2F5F4F504C4300FE010000004D000000000001000600'
And then you go through that string converting each hex character and adding it, which means you are dealing with values from 0 to 15. You need to take each pair of characters as one byte and convert that (values from 0 to 255). Or you need to use actual binary data instead of a text representation of the binary data.
At the end of the algorithm, you don't really need to use the ~ operator if you don't trust it. Instead you can do (0xffff - (x % 0x10000)) + 1. Bear in mind that prior to adding 1, the value might actually be 0xffff, so you need to take the entire result modulo 0x10000 afterwards. Your friend's C++ version uses the short datatype, so no modulo is necessary at all because the short will naturally overflow.
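For reference, here is a minimal sketch (not the Unitronics driver code; the names are mine) of the byte-wise calculation described above. It should reproduce the header checksum 0xFCF1, which appears byte-swapped as F1 FC in the sample message, and the details checksum 0xFEFE:

#include <cstdint>
#include <cstdio>
#include <vector>

// Sum whole bytes (0-255 each), take the sum modulo 0x10000, then the
// two's complement, keeping the result in 16 bits.
static unsigned plc_checksum(const std::vector<std::uint8_t>& bytes)
{
    unsigned sum = 0;
    for (std::uint8_t b : bytes)
        sum += b;
    return (0x10000 - (sum % 0x10000)) % 0x10000;   // equivalent to (~sum + 1) & 0xffff
}

int main()
{
    // 22-byte header and 6-byte details from the sample message in the question.
    std::vector<std::uint8_t> header  = { 0x2F,0x5F,0x4F,0x50,0x4C,0x43,0x00,0xFE,
                                          0x01,0x00,0x00,0x00,0x4D,0x00,0x00,0x00,
                                          0x00,0x00,0x01,0x00,0x06,0x00 };
    std::vector<std::uint8_t> details = { 0x01,0x00,0x01,0xFF,0x01,0x00 };

    std::printf("header  checksum: %04X\n", plc_checksum(header));   // expect FCF1
    std::printf("details checksum: %04X\n", plc_checksum(details));  // expect FEFE
    return 0;
}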
What I must do is open a file in binary mode that contains stored data intended to be interpreted as integers. I have seen other examples, such as the Stack Overflow question "Reading 'integer' size bytes from a char* array", but I want to try taking a different approach (I may just be stubborn, or stupid :/). I first created a simple binary file in a hex editor that reads as follows.
00 00 00 47 00 00 00 17 00 00 00 41
This (should) equal 71, 23, and 65 if the 12 bytes were divided into 3 integers.
After opening this file in binary mode and reading 4 bytes into an array of chars, how can I use bitwise operations to make char[0]'s bits the first 8 bits of an int, and so on, until the bits of each char are part of the int?
My integer = 00      00      00      00
             ^       ^       ^       ^
Chars:       Char[0] Char[1] Char[2] Char[3]
             00      00      00      47

So my integer (hex) = 00 00 00 47 = numerical value of 71
Also, I don't know how the endianness of my system comes into play here, so is there anything that I need to keep in mind?
Here is a code snippet of what I have so far, I just don't know the next steps to take.
std::fstream myfile;
myfile.open("C:\\Users\\Jacob\\Desktop\\hextest.txt", std::ios::in | std::ios::out | std::ios::binary);
if (myfile.is_open() == false)
{
    std::cout << "Error" << std::endl;
}
char* mychar;
std::cout << myfile.is_open() << std::endl;
mychar = new char[4];
myfile.read(mychar, 4);
I eventually plan on dealing with reading floats from a file and maybe a custom data type eventually, but first I just need to get more familiar with using bitwise operations.
Thanks.
You want the bitwise left shift operator:
typedef unsigned char u8; // in case char is signed by default on your platform
unsigned num = ((u8)chars[0] << 24) | ((u8)chars[1] << 16) | ((u8)chars[2] << 8) | (u8)chars[3];
What it does is shift the left argument a specified number of bits to the left, adding zeros from the right as stuffing. For example, 2 << 1 is 4, since 2 is 10 in binary and shifting one to the left gives 100, which is 4.
This can be written in a more general loop form:
unsigned num = 0;
for (int i = 0; i != 4; ++i) {
    num |= (u8)chars[i] << (24 - i * 8); // += could have also been used
}
The endianness of your system doesn't matter here; you know the endianness of the representation in the file, which is constant (and therefore portable), so when you read in the bytes you know what to do with them. The internal representation of the integer in your CPU/memory may be different from that of the file, but the logical bitwise manipulation of it in code is independent of your system's endianness; the least significant bits are always at the right, and the most at the left (in code). That's why shifting is cross-platform -- it operates at the logical bit level :-)
Have you thought of using Boost.Spirit to make a binary parser? You might hit a bit of a learning curve when you start, but if you want to expand your program later to read floats and structured types, you'll have an excellent base to start from.
Spirit is very well-documented and is part of Boost. Once you get around to understanding its ins and outs, it's really mind-boggling what you can do with it, so if you have a bit of time to play around with it, I'd really recommend taking a look.
Otherwise, if you want your binary to be "portable" - i.e. you want to be able to read it on a big-endian and a little-endian machine, you'll need some sort of byte-order mark (BOM). That would be the first thing you'd read, after which you can simply read your integers byte by byte. Simplest thing would probably be to read them into a union (if you know the size of the integer you're going to read), like this:
union U
{
    unsigned char uc_[4];
    unsigned long ui_;
};
Read the data into the uc_ member, swap the bytes around if you need to change endianness, and read the value from the ui_ member. There's no shifting etc. to be done, except for the swapping if you want to change endianness.
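For illustration, a minimal sketch of that approach, assuming the file stores 4-byte big-endian integers as in the question and that the program runs on a little-endian machine (uint32_t is used instead of unsigned long so the union is exactly four bytes):

#include <cstdint>
#include <fstream>
#include <iostream>
#include <utility>

union U
{
    unsigned char uc_[4];
    std::uint32_t ui_;   // fixed-width type so the union is exactly four bytes
};

int main()
{
    std::ifstream in("hextest.txt", std::ios::in | std::ios::binary);
    U u;
    while (in.read(reinterpret_cast<char*>(u.uc_), 4))
    {
        // The file is big-endian; on a little-endian host, swap the byte order.
        std::swap(u.uc_[0], u.uc_[3]);
        std::swap(u.uc_[1], u.uc_[2]);
        std::cout << u.ui_ << '\n';   // prints 71, 23, 65 for the example file on a little-endian host
    }
    return 0;
}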
HTH
rlc