How to calculate checksum as per AIS 140 standard

I want to send a message to the server as per the AIS 140 standard. Please explain how to calculate the checksum. Find below the sample message format.
$Header,iTriangle,KA01I2000,861693034634154,1_37T02B0164MAIS,AIS140,12.976545,N,77.549759,E*50

As per the AIS 140 standard, the checksum is calculated by XOR-ing all the bytes of the packet between the '$' and the '*'.
Note: you have to exclude the leading '$'.
Caution: use data from a real device to verify this code (the example provided in the doc doesn't have a valid checksum).
This JavaScript code will get the job done.
function checksum(packet) {
  const charArray = packet.split('');
  let xor = 0;
  const n = charArray.length;
  // XOR every byte between the leading '$' and the trailing '*XX'
  for (let i = 1; i < n - 3; i++) {
    xor = xor ^ charArray[i].charCodeAt(0);
  }
  // parse the two hex digits after '*' and compare
  const cs = parseInt(charArray[n - 2] + charArray[n - 1], 16);
  return xor === cs;
}
checksum('$Header,iTriangle,KA01I2000,861693034634154,1_37T02B0164MAIS,AIS140,12.976545,N,77.549759,E*50')
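If you need to build an outgoing packet rather than validate a received one, the same XOR rule applies. Here is a minimal Python sketch (mine, not from the AIS 140 document) that computes the two hex digits to append after the '*':

def make_checksum(body):
    # XOR every byte of the sentence between '$' and '*'
    xor = 0
    for b in body.encode("ascii"):
        xor ^= b
    return format(xor, "02X")   # two uppercase hex digits

body = "Header,iTriangle,KA01I2000,861693034634154,1_37T02B0164MAIS,AIS140,12.976545,N,77.549759,E"
packet = "$" + body + "*" + make_checksum(body)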


Why is there a loop in this division as multiplication code?

I got the JS code below from an archive of Hacker's Delight (view the source).
The code takes in a divisor (such as 7) and spits out a magic number to multiply with; you then bit-shift the product to get the quotient. I don't remember assembly or much of the math, so I'm sure I'm wrong somewhere, but I can't find the reason why.
From my understanding, you can get a magic number by computing ceil(1/divisor * (1 << 32)) (or << 64 for 64-bit values, but then you'd need bigger ints). If you multiply an integer with imul, you get the low half of the product in one register and the high half in another. The high register magically holds the correct result of the division when you use this magic number from my formula.
I wrote some C++ code to show what I mean. However, I only tested it with the values below, and it seems correct. The JS code has a loop and more, and I was wondering why. Am I missing something? What values would give an incorrect result with my code that the JS code gets right? I'm not very good at math, so I didn't understand any of the comments.
#include <cstdio>
#include <cassert>

int main(int argc, char *argv[])
{
    auto test_divisor = 7;
    auto test_value = 43;
    auto a = test_value * test_divisor;
    auto b = a - 1; // one less, to test the other side of the boundary
    auto magic = (1ULL << 32) / test_divisor;
    if (((1ULL << 32) % test_divisor) != 0) {
        magic++; // round up
    }
    auto answer1 = (a * magic) >> 32;
    auto answer2 = (b * magic) >> 32;
    assert(answer1 == test_value);
    assert(answer2 == test_value - 1);
    printf("%llu %llu\n", answer1, answer2); // unsigned long long, so %llu
}
JS code from Hacker's Delight:
var two31 = 0x80000000
var two32 = 0x100000000
function magic_signed(d) { with(Math) {
    if (d >= two31) d = d - two32 // Treat large positive as short for negative.
    var ad = abs(d)
    var t = two31 + (d >>> 31)
    var anc = t - 1 - t%ad        // Absolute value of nc.
    var p = 31                    // Init p.
    var q1 = floor(two31/anc)     // Init q1 = 2**p/|nc|.
    var r1 = two31 - q1*anc       // Init r1 = rem(2**p, |nc|).
    var q2 = floor(two31/ad)      // Init q2 = 2**p/|d|.
    var r2 = two31 - q2*ad        // Init r2 = rem(2**p, |d|).
    do {
        p = p + 1;
        q1 = 2*q1;                // Update q1 = 2**p/|nc|.
        r1 = 2*r1;                // Update r1 = rem(2**p, |nc|).
        if (r1 >= anc) {          // (Must be an unsigned
            q1 = q1 + 1;          // comparison here).
            r1 = r1 - anc;}
        q2 = 2*q2;                // Update q2 = 2**p/|d|.
        r2 = 2*r2;                // Update r2 = rem(2**p, |d|).
        if (r2 >= ad) {           // (Must be an unsigned
            q2 = q2 + 1;          // comparison here).
            r2 = r2 - ad;}
        var delta = ad - r2;
    } while (q1 < delta || (q1 == delta && r1 == 0))
    var mag = q2 + 1
    if (d < 0) mag = two32 - mag  // Magic number and
    shift = p - 32                // shift amount to return.
    return mag
}}
In the C code:
auto magic = (1ULL<<32)/test_divisor;
We get an integer value in magic because both (1ULL<<32) and test_divisor are integers. The algorithm requires incrementing magic under certain conditions, which is what the next conditional statement does.
The multiplications are also integer operations:
auto answer1 = (a*magic) >> 32;
auto answer2 = (b*magic) >> 32;
and the C code is done.
In the JS code:
All variables are var; there are no data types, no integer division, and no integer multiplication. Bitwise operations are not easy or suitable to use in this algorithm, and numeric data goes through Number and BigInt, which are not like a C int or a C unsigned long long.
Hence the algorithm uses a loop to iteratively add and compare whether the "division & multiplication" has landed within the nearest integer.
Both versions try to implement the same algorithm; both "should" give the same answer, but the JS version is buggy and non-standard. While there are many issues with the JS version, I will highlight only three:
(1) In the loop, while trying to get the best power of 2, we have these two statements:
p = p + 1;
q1 = 2*q1; // Update q1 = 2**p/|nc|.
This is just incrementing a counter and multiplying a number by 2, which is a left shift in C++. The C++ version does not need this rigmarole.
(2) The while condition has two equality comparisons on the right-hand side of ||:
while (q1 < delta || (q1 == delta && r1 == 0))
But both of these will be false in floating-point calculations (for example, check Math.sqrt(2)*Math.sqrt(0.5) == 1: even though it must mathematically be true, it will almost always be false), so the while condition is effectively just the left-hand side of the ||, because the right-hand side will almost always be false.
(3) The JS version returns only one variable, mag, but the caller is also supposed to get (and use) the variable shift, which is only available via global-variable access. Inconsistent and bad.
Comparing the two, the C version is closer to the standard approach, but the point is to not use auto and instead use int64_t/uint64_t with a known number of bits.
First, I think ceil(1/divisor * (1 << 32)) can, depending on the divisor, give results that are off by one. So you don't need a loop, but you sometimes need a corrective factor.
Secondly, the JS code seems to allow for shifts other than 32: shift = p - 32 // shift amount to return. But then it never returns that value, so I'm not sure what is going on there.
Why not implement the JS code in C++ as well and then run a loop over all int32_t values and see if the two give the same result? That shouldn't take too long.
And when you find a d where they differ, you can then test a / d for all int32_t a using both magic numbers and compare a / d, a * m_ceil and a * m_js.
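To illustrate that brute-force test, here is a rough Python sketch (the helper names are mine, and it assumes unsigned division, unlike the signed JS routine). It builds the question's ceil-style magic number and compares (a * m) >> 32 against a // d; scanning all 2**32 dividends is slow in pure Python, so it only scans a window near the top of the range, where accumulated rounding error is largest:

def magic_ceil(d):
    m, r = divmod(1 << 32, d)
    return m + 1 if r else m          # round up, as in the C++ code above

def first_failure(d, lo, hi):
    m = magic_ceil(d)
    for a in range(lo, hi):
        if (a * m) >> 32 != a // d:
            return a                  # first dividend where the shortcut is wrong
    return None

print(first_failure(7, (1 << 32) - 1000, 1 << 32))

If this prints a dividend rather than None, that dividend is a concrete answer to "what values can I use to get an incorrect result": the simple ceil magic is not exact for every 32-bit input, which is the work the loop in the Hacker's Delight code is doing when it searches for a (magic, shift) pair that is exact everywhere.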

How to use fold statement index in function call

The fold manual gives an example:
input price = close;
input length = 9;
plot SMA = (fold n = 0 to length with s do s + getValue(price, n, length - 1)) / length;
This effectively calls a function iteratively, like in a for-loop body.
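For readers who don't use thinkScript, a rough Python analogue of that fold (my sketch; it assumes getValue(price, n, ...) reads the price n bars back and that fold's upper bound is exclusive) looks like this:

def sma(prices, length=9):
    # prices is ordered most-recent-first, so prices[n] is the value n bars back
    s = 0                      # "with s" accumulator
    for n in range(length):    # "fold n = 0 to length"
        s = s + prices[n]      # "do s + getValue(price, n, length - 1)"
    return s / length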
When I use this statement to call my own function as follows, it breaks because the loop index variable is not recognized as a variable that can be passed to my function:
script getItem {
    input index = 0;
    plot output = index * index;
}
script test {
    def total = fold index = 0 to 10 with accumulator = 0 do
        accumulator + getItem(index); # Error: No such variable: index
}
It is a known bug / limitation. It has been acknowledged, but without a timeline for a fix. No workaround is available.
Have you tried adding a small remainder to your defined variable within the fold and then passing that variable? You can strip the integer value and then use the remainder as your counter value. I've been playing around with something similar, but it isn't working (yet). Here's an example:
script TailOverlap {
    input i = 0;
    def ii = (Round(i, 1) - i) * 1000;
    ... more stuff
    plot result = result;
};

def _S = (
    fold i = displace to period
    with c = 0
    do if
        TailOverlap(i = _S)    # send current value of _S to script
    then _S[1] + 1.0001        # increment variable and counter
    else _S[1] + 0.0001        # increment the counter only
);
I'm going to continue playing around with this; if I get it to work, I'll post the final solution. If you're able to get this working (or have discovered another solution), please do post it here so I know.
Thanks!

pycrypto is slow encrypting and decrypting

In practice, I select an executable, size 20 MB.
I read the content using file.read(size=16).
If the length of the returned byte string is less than 16, I fill the rest with \0 (NUL) bytes.
from Crypto.Cipher import AES

f = open("./installer.exe", "rb")
obj = AES.new(b"0123456789012345", AES.MODE_CBC, b"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0")
bs = b""
t = f.read(16)
while t != b"":
    if len(t) < 16:
        t = t + b"\0" * (16 - len(t))  # pad the final short block to 16 bytes
    bs = bs + obj.encrypt(t)
    t = f.read(16)
Then bs contains the byte string of ALL the content encrypted with 0123456789012345.
So the mechanism is: read the file, encrypt the content as in the piece of code above (using obj.encrypt()), then write a new file with the encrypted content. Then I read the data of the encrypted file, decrypt it by a similar procedure using obj.decrypt in 16-byte intervals, and write a new file with the decrypted data.
This takes approximately 3 minutes.
Is that fast, slow, or expected?
From what I saw, the module is written in C. Should I maybe use embedded Cython to make it faster?
How can PGP supposedly decrypt larger amounts of data in real time, for example in an encrypted virtual disk?
edit:
This takes almost the same time:
obj = AES.new(b"0123456789012345", AES.MODE_CBC, b"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0")
bs = b""
t = f.read(16)
while t != b"":
    if len(t) < 16:
        t = t + b"\0" * (16 - len(t))
    bs = bs + t
    t = f.read(16)
bse = obj.encrypt(bs)
OK, the problem was the size of the buffers being encrypted. I decided to use 64000-byte strings.
The procedure is simple: split the total size into 64000-byte segments and encrypt each one. For the last segment, if its size is less than 64000 and not a multiple of 16, find the nearest multiple of 16 and fill the remaining space with zeros.
bs = b""
dt = f.read()
dtl = len(dt)
dtr = ( dtl / 64000 ) + 1
for x in range(0, dtr):
if x == dtr-1:
i1 = 64000 * x
dst = dtl - i1
i = math.ceil(dst / 16.0) * 16
dst = i - dst
buf = dt[i1:] + (b"\0" * int(dst))
bs = bs + obj.encrypt(buf)
else:
i1, i2 = 64000 * x , 64000 * (x+1)
bs = bs + obj.encrypt(dt[i1:i2])
Now it takes 10 seconds.
Thanks, everyone.
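The slowdown has two plausible sources: per-call overhead from roughly 1.3 million encrypt() calls, and the repeated bs = bs + ... concatenation, which is quadratic in the number of chunks. Here is a rough benchmark sketch (mine, assuming the same PyCrypto/PyCryptodome API as above) that isolates the first factor by assembling the output with join() instead:

import time
from Crypto.Cipher import AES

data = b"\0" * (20 * 1024 * 1024)      # 20 MB of dummy data, 16-byte aligned

for chunk in (16, 64000):
    obj = AES.new(b"0123456789012345", AES.MODE_CBC, b"\0" * 16)
    start = time.perf_counter()
    out = b"".join(obj.encrypt(data[i:i + chunk])
                   for i in range(0, len(data), chunk))
    print(chunk, "->", round(time.perf_counter() - start, 2), "s")

Comparing the two timings shows how much of the cost is crossing the Python/C boundary per block rather than the AES computation itself.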

Reversed message CRC calculation

Consider that you have this message (ab,cd,ef) and the ROHC (Robust Header Compression) CRC-8 polynomial 0xE0, i.e.
C(x) = x^0 + x^1 + x^2 + x^8
Is there any way I can calculate the CRC over the message backwards, starting from the last byte, and get the same result as if I were calculating it over the original message?
No, this is generally not possible for your polynomial (100000111).
E.g.: 110100111 / 100000111 = 01101001
but: 111001011 / xxxxxxxxx != 01101001 (in general)
However, you can still check the validity of your message if you know the CRC beforehand.
E.g.: 110100111 / 100000111 = 01101001
=> message transmitted = 11010011 01101001
=> message received (reversed) = 10010110 11001011
then: 10010110 11001011 / 111000001 == 0
(where 111000001 = reversed(100000111))
=> crc(reversed(11001011)) = crc(11010011) == reversed(10010110) = 01101001
Note that this only holds if the message is reversed BITWISE.
I.e.: reversed(0xABC) = reversed(101010111100) = 001111010101 = 0x3D5, which is not the same as 0xCBA = 110010111010.
So be careful when implementing your algorithm ;-)
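As a sanity check of that claim, here is a small Python sketch (mine) that does the GF(2) long division directly on bit strings: it computes a CRC for a message, verifies that message+CRC divides the polynomial evenly, and then verifies that the bitwise-reversed stream divides the bitwise-reversed polynomial evenly. Note this toy routine uses no reflection or final XOR, so the CRC it prints won't necessarily match a library CRC-8 or the illustrative numbers above:

def mod2_div(bits, poly):
    # remainder of GF(2) polynomial division on '0'/'1' strings
    bits, poly = list(map(int, bits)), list(map(int, poly))
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return "".join(map(str, bits[-(len(poly) - 1):]))

poly = "100000111"                     # x^8 + x^2 + x^1 + x^0
msg = "11010011"
crc = mod2_div(msg + "0" * 8, poly)    # remainder of msg * x^8

assert mod2_div(msg + crc, poly) == "0" * 8            # forward stream divides evenly
rev = lambda s: s[::-1]
assert mod2_div(rev(msg + crc), rev(poly)) == "0" * 8  # reversed stream vs reversed poly
print(crc)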

How do I enumerate resolutions supported via TWAIN

I have to enumerate the DPIs supported by the scanner via the TWAIN interface.
// after Acquire is called...
TW_CAPABILITY twCap;
GetCapability(twCap, ICAP_XRESOLUTION);
if (twCap.ConType == TWON_ENUMERATION) {
    pTW_ENUMERATION en = (pTW_ENUMERATION) GlobalLock(twCap.hContainer);
    for (int i = 0; i < en->NumItems; i++) {
        if (en->ItemType == TWTY_FIX32) {
            TW_UINT32 res = (TW_UINT32)(en->ItemList[i * 4]);
            // print res...
        }
    }
}
That works fine, but the output sequence is strange:
50
100
150
44
88
176
I know for certain that my scanner supports 300 DPI, but this value is not returned.
What am I doing wrong here? Why is 300 not in the sequence, even though I can set it programmatically?
The code you've shown takes just the lowest byte of each resolution and then converts it to an integer (the pointer points to bytes, so the indexing fetches a single byte, which is then cast to an integer).
You must cast the pointer to TW_UINT32 values BEFORE reading the value.
The number 44, for instance, is the low byte of the number 300 (300 DPI): 300 = 0x12C, and 0x2C = 44.
The following code should do it:
TW_UINT32 res = ((TW_UINT32*)(en->ItemList))[i];
or
TW_UINT32 res = *((TW_UINT32*)(en->ItemList + i * 4));
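To see where 44, 88 and 176 come from, here is a small Python sketch (mine) that simulates the bug: it packs the resolutions as little-endian 32-bit integers, as they would sit in ItemList when the fractional part is zero, and reads back only byte 0 of each 4-byte item, exactly as en->ItemList[i*4] does:

import struct

# what ItemList holds, assuming whole-number resolutions (fraction part zero)
item_list = struct.pack("<6I", 50, 100, 150, 300, 600, 1200)

low_bytes = [item_list[i * 4] for i in range(6)]   # the buggy per-byte read
print(low_bytes)                                   # [50, 100, 150, 44, 88, 176]

whole = list(struct.unpack("<6I", item_list))      # the corrected 32-bit read
print(whole)                                       # [50, 100, 150, 300, 600, 1200]

The values below 256 survive the truncation, which is why 50, 100 and 150 looked correct. Strictly speaking, since ItemType is TWTY_FIX32, each item is a TW_FIX32 fixed-point value (16-bit whole part plus 16-bit fraction); reading it as a TW_UINT32 yields the whole part only when the fraction is zero, which is typical for resolution lists.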