Does anyone know of some CRC test vectors for CRC16-CCITT?
I do not have a CRC implementation I can trust, and need to test either someone else's implementation or my own. (For CRC32, I use the PNG code as the gold standard, as it's a reputable reference implementation.)
(this site's CRC calculator looks useful but I need to verify correctness somehow)
UPDATE: The above CRC calculator looks useful, but it only takes ASCII and entering hex input is very awkward. (The ASCII text "12" has to be entered as %31%32, so you can't just copy and paste a long string of hexadecimal bytes; the % character also doesn't seem to have an escape.)
I have verified this online calculator, which takes hex inputs, against the Boost test vectors for CRC16, CRC16-CCITT, and CRC32.
Boost has a nice CRC implementation you can test against. As far as I know it's possible to configure it for CRC16.
http://www.boost.org/doc/libs/1_41_0/libs/crc/index.html
There seems to be an example of how to set it up to simulate CCITT on this page: http://www.boost.org/doc/libs/1_41_0/libs/crc/crc.html
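For example, here is a minimal sketch (mine, not taken from the Boost docs) using Boost's predefined boost::crc_ccitt_type, which as far as I can tell is configured as poly 0x1021, initial value 0xFFFF, no reflection; its documented check value for "123456789" is 0x29B1:

#include <iostream>
#include <boost/crc.hpp>

int main()
{
    const char data[] = "123456789";             // the usual CRC check string
    boost::crc_ccitt_type crc;                   // 16-bit CCITT: poly 0x1021, init 0xFFFF
    crc.process_bytes(data, sizeof(data) - 1);   // exclude the trailing '\0'
    std::cout << std::hex << crc.checksum() << std::endl;  // expected: 29b1
    return 0;
}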
Python's binascii package has had CRC-16 for a while.
Use binascii.crc_hqx(val, 0xFFFF); the test vectors from the other answers check out:
$ python3
Python 3.7.3 (default, Dec 20 2019, 18:57:59)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import binascii
>>> tv = binascii.a2b_hex("12345670")
>>> hex(binascii.crc_hqx(tv, 0xFFFF))
'0xb1e4'
>>> tv = "123456789".encode("ascii")
>>> hex(binascii.crc_hqx(tv, 0xFFFF))
'0x29b1'
Here are two test vectors for the CCITT-16 CRC (polynomial x^16 + x^12 + x^5 + 1, i.e. 0x1021 in big-endian hex representation; initial CRC value 0xFFFF; XOR-out value zero):
0x12345670 = 0xB1E4
0x5A261977 = 0x1AAD
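For reference, here is a minimal bit-by-bit sketch (my own code, not from any particular library) that should reproduce these vectors given the parameters above (poly 0x1021, initial value 0xFFFF, no reflection, no final XOR):

#include <cstddef>
#include <cstdint>
#include <cstdio>

// CRC16-CCITT, MSB first: poly 0x1021, init 0xFFFF, no reflection, no XOR out.
uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; ++i) {
        crc ^= static_cast<uint16_t>(data[i]) << 8;
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 0x8000) ? (crc << 1) ^ 0x1021 : (crc << 1);
    }
    return crc;
}

int main()
{
    const uint8_t tv1[] = { 0x12, 0x34, 0x56, 0x70 };
    const uint8_t tv2[] = { 0x5A, 0x26, 0x19, 0x77 };
    std::printf("%04X\n", crc16_ccitt(tv1, sizeof tv1));  // expected: B1E4
    std::printf("%04X\n", crc16_ccitt(tv2, sizeof tv2));  // expected: 1AAD
    return 0;
}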
I've found this one:
http://introcs.cs.princeton.edu/java/51data/CRC16CCITT.java.html
"123456789".getBytes("ASCII"); -> 0x29b1
I want to use the equation below in my code:
A = g^a mod p; // g raised to the power a, modulo p
(something like 2^5 % 3 = 32 % 3 = 2)
(This equation looks like the Diffie-Hellman algorithm for security.)
Where:
^ is (power)
g is the fixed number 0x05
a is a 128-bit (16-byte) randomly generated number
p is a fixed 128-bit (16-byte) hex number, something like 0xD4A283974897234CE908B3478387A3.
I am using:
Qt 4.8.7
Compiler: MinGW32 (checked with the Boost library, Boost 1.70)
The solutions I found that didn't work for me are listed below:
One can use __int128, but that requires a recent GCC compiler or a 64-bit MinGW compiler, neither of which I am using now.
I found that a recent version of Qt has the QSslDiffieHellmanParameters class, but again it is not supported in our Qt version.
I found libraries such as boost/multiprecision/cpp_int.hpp (Boost 1.70) that do have data types such as int128_t and int256_t, but due to a compiler issue or something else we are not able to store a 128-bit number, meaning:
if I do:
int128_t ptval128 = 0xAB1232423243434343BAE3453345E34B;
cout << "ptval128 = " << std::hex << ptval128 << endl;
// prints only 0xAB12324232434343 (half the digits)
I tried using a BigInt library, which was much more useful, but again 5^(128-bit number) is way too big; it takes hours to compute (I waited 1 hour and 16 minutes and then killed the application). In pseudocode, what I want to compute is:
int myGval = 0x05;
128_bit_data_type myPVal= 0xD4A283974897234CE908B3478387A3;
128_bit_data_type 128_bit_variable = 128_bit_random_data;
myVal = (myGval)^(128_bit_variable) % (myPVal);
That is not how to do modular exponentiation! The first problem is that 5 ^ 128_bit_variable is huge, so big that it won't fit into memory in any computer available today. To keep the required storage space within bounds, you have to take the remainder % myPVal after every operation.
The second problem is that you can't compute 5 ^ 128_bit_variable simply by multiplying 5 by itself 128_bit_variable times -- that would take longer than the age of the universe. You need to use an exponentiation ladder, which requires just 128 squarings and at most 128 multiplications. See this Wikipedia article for the details. In the end, computing 5 ^ 128_bit_number % myPVal should take a fraction of a second.
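To make the ladder concrete, here is a minimal square-and-multiply sketch using Boost's cpp_int (which the question says is available in Boost 1.70). The helper name mod_pow is mine, and the hex constants are just the question's example values; building the big constants from strings also sidesteps the truncated-literal problem described above.

#include <iostream>
#include <boost/multiprecision/cpp_int.hpp>

using boost::multiprecision::cpp_int;

// Right-to-left square-and-multiply: every intermediate value stays below mod.
cpp_int mod_pow(cpp_int base, cpp_int exp, const cpp_int& mod)
{
    cpp_int result = 1;
    base %= mod;
    while (exp > 0) {
        if ((exp & 1) != 0)
            result = (result * base) % mod;  // multiply step
        base = (base * base) % mod;          // square step
        exp >>= 1;
    }
    return result;
}

int main()
{
    cpp_int g = 5;
    // Construct the big values from strings so no integer literal overflows.
    cpp_int a("0xAB1232423243434343BAE3453345E34B");
    cpp_int p("0xD4A283974897234CE908B3478387A3");
    std::cout << std::hex << mod_pow(g, a, p) << std::endl;
    return 0;
}

If I remember correctly, Boost.Multiprecision also ships a ready-made powm(b, e, m) function that does the same thing, so you may not even need to write the loop yourself.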
I've installed the TensorFlow package and compiled it from source for the IBM s390x architecture. The image recognition classify_image.py sample described in the tutorial throws an error as shown below:
Run command:
python ./classify_image.py --model_dir=/data/shared/myprojects/tensorflow/models/models-master/tutorials/image/imagenet --image_file=/data/shared/myprojects/keras/images/claude_profile.jpg
Error message:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 659, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Cannot reshape a tensor with 1041082757314414592 elements to shape [16777216,524288] (8796093022208 elements) for 'pool_3/_reshape' (op: 'Reshape') with input shapes: [1,22546423,22546423,2048], [2] and with input tensors computed as partial shapes: input[1] = [16777216,524288].
Version:
python
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.VERSION
'1.3.1'
>>>
A possible cause of the error is an endianness incompatibility: the trained model is likely stored in little-endian notation while the CPU works in big-endian mode. Is there an easy way to configure byte swapping to change the endianness of the input data? Other TensorFlow samples without image processing routines execute OK.
1041082757314414592 sounds more like an overflow/underflow than an endianness issue. If you don't load a pre-trained example but run one from scratch, do you also see issues?
This seems to be happening because the Inception model, being pre-trained on a little-endian machine, has issues when loaded on big-endian (s390x). Also, any graph (e.g. classify_image_graph_def.pb) stores values such as sizes in one byte order, which gives unexpected results when read in the other.
As far as I know there is no tool available yet to convert a saved model to be compatible with big-endian machines.
So for now, on big-endian systems, we need to train our models from scratch.
Recently I started working on a project related to EMAC and came across a few doubts and blockers with respect to the implementation, so I decided to post my question here to get advice and suggestions from experienced people.
At present, I am working on interfacing the EMAC-DM9161A module with my SAM3x - Taiji Uino board for high-speed Ethernet communication. I am using the library developed by Palliser, which is uploaded on GitHub as elechouse/EMAC-Demo. In the source code, ethernet_phy.c, I came across this function to initialize the DM9161A PHY component:
uint8_t ethernet_phy_init(Emac *p_emac, uint8_t uc_phy_addr, uint32_t mck);
Problem: The argument uint8_t uc_phy_addr is an 8-bit register through which I want to pass a 48-bit MAC address such as 70-62-D8-28-C2-8E. I understand that I could use two 32-bit registers: the first 32 bits of the MAC address, i.e. 70-62-D8-28, in one 32-bit register and the remaining 16 bits, i.e. C2-8E, in another. However, I cannot do this, since I need to use the ethernet_phy_init function above, in which a uint8_t is used to pass the 48-bit MAC address. So I'd like to know how to make this happen.
Another question: I ran some code to understand this by trial and error and came across some doubts; here is the code:
#include <iostream>
#include <cstdio>
#include <cstdint>
using namespace std;

int main()
{
    uint8_t phy_addr = 49;   // assign the value 49 to an 8-bit unsigned variable
    int8_t phy_addr1 = 49;   // 8-bit signed
    int phy_addr2 = 49;      // plain int
    cout << phy_addr;
    cout << phy_addr1;
    cout << phy_addr2;
    getchar();
    return 0;
}
Output Results:
1
1
49
So my doubt is: why is the output displayed as an ASCII character whenever I use an 8-bit variable to store the value 49, but when I use a normal 32-bit int variable to store 49 it displays the decimal value 49? Why does this happen? And lastly, how do I store a MAC address in an 8-bit register?
About the second question:
uint8_t/int8_t are the same as unsigned/signed char, and cout will handle them as chars. Use static_cast<int> to print them as numbers.
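For example (a minimal sketch; the variable name is borrowed from the question):

#include <iostream>
#include <cstdint>

int main()
{
    uint8_t phy_addr = 49;
    std::cout << phy_addr << std::endl;                    // prints "1" (the character with code 49)
    std::cout << static_cast<int>(phy_addr) << std::endl;  // prints "49"
    return 0;
}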
About the first question:
I have never worked with EMAC, but judging by this example the MAC address should be set this way:
#define ETHERNET_CONF_ETHADDR0 0x00
#define ETHERNET_CONF_ETHADDR1 0x04
#define ETHERNET_CONF_ETHADDR2 0x25
#define ETHERNET_CONF_ETHADDR3 0x1C
#define ETHERNET_CONF_ETHADDR4 0xA0
#define ETHERNET_CONF_ETHADDR5 0x02
static uint8_t gs_uc_mac_address[] =
{ ETHERNET_CONF_ETHADDR0, ETHERNET_CONF_ETHADDR1, ETHERNET_CONF_ETHADDR2,
ETHERNET_CONF_ETHADDR3, ETHERNET_CONF_ETHADDR4, ETHERNET_CONF_ETHADDR5
};
emac_options_t emac_option;
memcpy(emac_option.uc_mac_addr, gs_uc_mac_address, sizeof(gs_uc_mac_address));
emac_dev_init(EMAC, &gs_emac_dev, &emac_option);
Regarding your second question: the first two variables are 8-bit (one signed and one unsigned), so the ostream assumes they are chars (also 8 bits wide) and displays the char representation for them ("1" = ASCII 49).
As for the original question, I browsed the Atmel sources a little, and the MAC address has nothing to do with ethernet_phy_init (everything there is at a much lower level):
uc_phy_addr - seems to be the interface index
mck - seems to be a timer-related value.
I figured it out, so I am going to answer my own question for those beginners like me who may encounter this same doubt.
Answer: As suggested by the members in the comments (they were right, and thanks to them), the function parameter uint8_t uc_phy_addr represents the 5-bit port address of the PHY chip register, not the MAC address. Hence the address is set to 0x01 to enable only the receive pin, keeping the other 4 bits 0. The 4th bit is the CSR bit, which is also set to 0 in this case (for more details, please refer to the DM9161A data sheet).
The GNU C Library has the function drem (alias remainder).
How can I simulate this function just using the modules supported by Google App Engine Python 2.7 runtime?
From the GNU manual for drem:
These functions are like fmod except that they round the internal quotient n to the nearest integer instead of towards zero to an integer. For example, drem (6.5, 2.3) returns -0.4, which is 6.5 minus 6.9.
From the GNU manual for fmod:
These functions compute the remainder from the division of numerator by denominator. Specifically, the return value is numerator - n * denominator, where n is the quotient of numerator divided by denominator, rounded towards zero to an integer. Thus, fmod (6.5, 2.3) returns 1.9, which is 6.5 minus 4.6.
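For reference, here is a small C++ sketch (mine) of the behaviour being simulated; std::remainder is the same function as drem, and the printed values are approximate:

#include <cmath>
#include <cstdio>

int main()
{
    // remainder() rounds the internal quotient to the nearest integer,
    // while fmod() truncates it towards zero.
    std::printf("%g\n", std::remainder(6.5, 2.3));  // approx. -0.4
    std::printf("%g\n", std::fmod(6.5, 2.3));       // approx.  1.9
    std::printf("%g\n", std::remainder(1.0, 2.0));  // 1 (ties in the quotient round to even)
    return 0;
}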
Reading the documentation, the following Python code should work:
def drem(x, y):
    n = round(x / y)
    return x - n * y
However, with Python drem(1.0, 2.0) == -1.0, while with C drem(1.0, 2.0) == 1.0. Note that Python returns negative one and C returns positive one. This is almost certainly an internal difference in rounding floats. As far as I can tell, both functions behave the same otherwise, where 2 * x != y.
How can I make my Python drem function work the same as its C equivalent?
The key to solving this problem is to realise that the drem/remainder function specification requires the internal rounding calculation to round halfway cases to even.
Therefore we cannot use the built-in round function in Python 2.x, as it rounds halfway cases away from zero. However, the round function in Python 3.x has changed to round halfway cases to even. So the following Python 3.x code is equivalent to the GNU C Library drem function but will not work in Python 2.x:
def drem(x, y):
    n = round(x / y)
    return x - n * y
To achieve the same with Python 2.x we can use the decimal module and its remainder_near function:
import decimal

def drem(x, y):
    xd = decimal.Decimal(x)
    yd = decimal.Decimal(y)
    return float(xd.remainder_near(yd))
EDIT: I just read your first comment and see that you cannot use the ctypes module. Anyways, I learned a lot today by trying to find an answer to your problem.
Considering that numpy.round() rounds values exactly halfway between rounded decimal values to the next even integer, using numpy is not a good solution.
Also, drem internally calls this MONSTER function, which should be hard to implement in Python.
Inspired by this article, I would recommend calling the drem function from the C math library directly. Something along these lines should do the trick:
from ctypes import CDLL, c_double
# Use the C math library directly from Python
# This works for Linux, but the version might differ for some systems
libm = CDLL('libm.so.6')
# For Windows, try this instead:
# from ctypes import cdll
# libm = cdll.libc
# Make sure the return value is handled as double instead of the default int
libm.drem.restype = c_double
# Make sure the arguments are double by putting them inside c_double()
# Call your function and have fun!
print libm.drem(c_double(1.0), c_double(2.0))
I have a Python module that is operating as a server for a wireless handheld computer. Every time the handheld sends a message to the server, the module determines what kind of message it is, and then assembles an appropriate response. Because the responses are often state-dependent, I am using global variables where needed to retain/share information between the individual functions that handle each type of message.
The problem I'm having is when the application is closed (for whatever reason), the global variable values are (of course) lost, so on re-launching the application it's out of synch with the handheld. I need a reliable way to store those values for recovery.
The direction I've gone so far (but have not gotten it to work yet) is to write the variable names and their values to a CSV file on the disk, every time they're updated -- and then (when the app is launched), look for that file and use it to assign the variables to their previous states. I have no trouble writing the file or reading it, but for some reason the values just aren't getting assigned.
I can post the code for comments/help, but before that I wanted to find out whether I'm just going an entirely wrong direction in the first place. Is there a better (or at least preferable) way to save and recover these values?
====
Following up. It may be a touch clunky, but here's what I have, and it's working. The only globals I care about are the ones that start with "CUR_". I had to use tempDict1 because the interpreter doesn't seem to like iterating directly over globals().
import pickle
CUR_GLO1 = 'valglo1'
CUR_GLO2 = 'valglo2'
CUR_GLO3 = 'valglo3'
def saveGlobs():
    tempDict1 = globals().copy()
    tempDict2 = {}
    for key in tempDict1:
        if key[:4] == 'CUR_':
            tempDict2[key] = tempDict1[key]
    pickle.dump(tempDict2, open('tempDict.p', 'wb'))

def retrieveGlobs():
    tempDict = pickle.load(open('tempDict.p', 'rb'))
    globals().update(tempDict)
Writing it up as an answer...
What I think you want to do is a form of application checkpointing.
You can use the pickle module for conveniently saving and loading Python variables. Here is a simple example of how to use it. This discussion on Stack Overflow and this note seem to agree, although part of me thinks that there must be a better way.
Incidentally, you don't need to put everything into a dictionary. As long as you dump and load variables in the same order, and make sure that you don't change that order or insert data in the middle, you can just dump and load several variables. Using a dictionary like you did does remove the ordering dependency, though.
% python
Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> foo=123
>>> bar="hello"
>>> d={'abc': 123, 'def': 456}
>>> f=open('p.pickle', 'wb')
>>> pickle.dump(foo, f)
>>> pickle.dump(bar, f)
>>> pickle.dump(d, f)
>>> ^D
% python
Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> f=open('p.pickle','rb')
>>> foo=pickle.load(f)
>>> foo
123
>>> bar=pickle.load(f)
>>> bar
'hello'
>>> d=pickle.load(f)
>>> d
{'abc': 123, 'def': 456}
>>>