I have the following setup where I am trying to write a custom file header. The fields are described as follows:
// Assume these variables are initialized.
unsigned short x, y, z;
unsigned int dl;
unsigned int offset;
// End assume
u_int8_t major_version = 0x31;
u_int8_t minor_version = 0x30;
u_int32_t data_offset = (u_int32_t)offset;
u_int16_t size_x = (u_int16_t)x;
u_int16_t size_y = (u_int16_t)y;
u_int16_t size_z = (u_int16_t)z;
u_int32_t data_size = (u_int32_t)dl;
Now, what I would like to do is compute an 8-bit header checksum over the fields from major_version through data_size. I am fairly new to this and something simple would suffice for my current needs.
I'm not sure what you are trying to achieve, but to strictly answer your question, an 8-bit checksum is usually computed as
sum_of_all_elements % 255
Simply put, add all the elements together, and sum % 255 is your checksum.
Watch out for overflows when doing the addition (you can compute partial sums if you run into trouble).
On a side note, an 8-bit checksum is not that good: it won't let you distinguish all cases and it won't help with data recovery; that's why a 16-bit checksum is usually preferred.
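As a minimal sketch of that approach, assuming the header fields have already been serialized into a byte buffer in the order listed above (the buffer layout and the function name Checksum8 are made up for illustration):
#include <cstdint>
#include <cstddef>
// Sum every byte of the packed header, then reduce modulo 255.
// A 32-bit accumulator avoids the overflow issue mentioned above.
uint8_t Checksum8(const uint8_t* data, std::size_t len)
{
    uint32_t sum = 0;
    for (std::size_t i = 0; i < len; ++i)
        sum += data[i];
    return static_cast<uint8_t>(sum % 255);
}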
I am working on the following lines of code:
#define WDOG_STATUS 0x0440
#define ESM_OP 0x08
and in a method of my defined class I have:
bool WatchDog = 0;
bool Operational = 0;
unsigned char i;
ULONG TempLong;
unsigned char Status;
TempLong.Long = SPIReadRegisterIndirect (WDOG_STATUS, 1); // read watchdog status
if ((TempLong.Byte[0] & 0x01) == 0x01)
WatchDog = 0;
else
WatchDog = 1;
TempLong.Long = SPIReadRegisterIndirect (AL_STATUS, 1);
Status = TempLong.Byte[0] & 0x0F;
if (Status == ESM_OP)
Operational = 1;
else
Operational = 0;
What SPIReadRegisterIndirect() does is take an unsigned short as the address of the register to read and an unsigned char Len as the number of bytes to read.
What is baffling me is Byte[]. I am assuming that this is a way to pick out part of the value in Long (which is read from SPIReadRegisterIndirect), but why is it [0]? Shouldn't it be 1? And how does that work? I mean, if it is isolating only one byte, for example in the WatchDog case, is TempLong.Byte[0] equal to 04 or 40? (When I print the value before the if statement, it is shown as 0, which is neither 04 nor 40 from the defined WDOG_STATUS register.)
Please consider that I am new to this subject. I have done Google searches and other searches, but unfortunately I could not find what I wanted. Could somebody please help me understand how this syntax works or direct me to documentation where I can read about it?
Thank you in advance.
Your ULONG must be defined somewhere; otherwise you'd get the error 'ULONG' does not name a type.
Probably something like:
typedef union {unsigned long Long; byte Byte[4];} ULONG;
Check union (and typedef) in your C/C++ book, and you'll see that this union lets you reinterpret the long variable as an array of bytes.
Byte[0] is the first byte. Depending on the hardware (AVR controllers are little-endian), it's probably the LSB (least significant byte).
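Here is a minimal sketch showing how such a union exposes the bytes of the long (the member names follow the typedef above; the value 0x12345678 is just an example, and the printed order assumes a little-endian target):
#include <cstdio>
// Reading a union member other than the last one written is, strictly
// speaking, only well defined in C, but most C++ compilers support it
// for this kind of register access.
typedef union {
    unsigned long Long;
    unsigned char Byte[4];
} ULONG;
int main()
{
    ULONG TempLong;
    TempLong.Long = 0x12345678UL;
    // On a little-endian machine Byte[0] is the least significant byte,
    // so this prints: 78 56 34 12
    for (int i = 0; i < 4; ++i)
        std::printf("%02X ", TempLong.Byte[i]);
    std::printf("\n");
    return 0;
}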
I'm implementing a serial binary protocol in C++ with the following format:
| sync word (0xAA) | sync word (0xBB) | message length (2 bytes) | device id (4 bytes) | message type (1 byte) | timestamp (4 bytes) | payload (N bytes) | crc (2 bytes) |
For the binary handling, I can only come up with a C-style way. For example, using memcpy to extract each field at a pre-defined offset:
void Parse(string data) {
bool found_msg_head = false;
int i=0;
for (; i < data.size() - 1; i++) {
if (data[i] == 0xAA && data[i+1] == 0xBB) {
found_msg_head = true;
break;
}
}
if (found_msg_head) {
uint16_t msg_length;
uint32_t device_id;
// declare fields ...
memcpy(&msg_length, data.c_str()+i+2, 2);
memcpy(&device_id, data.c_str()+i+4, 4);
// memcpy to each field ...
// memcpy crc and validate...
}
}
void SendMsg(const MyMsg& msg)
{
const uint16_t msg_len = sizeof(msg) + 16;
const uint32_t dev_id = 0x01;
const uint32_t timestamp = (uint32_t)GetCurrentTimestamp();
const uint8_t msg_type = 0xAB;
char buf[msg_len];
buf[0] = 0xAA;
buf[1] = 0xBB;
memcpy(buf + 2, &msg_len, sizeof(msg_len));
memcpy(buf + 4, &dev_id, sizeof(dev_id));
memcpy(buf + 8, &timestamp, sizeof(timestamp));
memcpy(buf + 12, &msg_type, sizeof(msg_type));
memcpy(buf + 13, &g_msg_num, sizeof(g_msg_num));
memcpy(buf + 14, &msg, sizeof(msg));
uint16_t crc = CalculateCrc16((uint8_t*)buf, sizeof(DevSettingMsg) + 14);
memcpy(buf + sizeof(buf) - 2, &crc, 2);
std::string str(buf, sizeof(buf));
// send str to serial port or other end points...
}
Is there any better way to implement this kind of protocol in C or C++?
If I want to keep my C++ code in C++ style (i.e., use the C++ STL only, no memcpy, no & to take a variable's address), what is the equivalent of the above mixed C/C++ code in C++ only?
Thanks.
Binary communication protocols are notorious for requiring lots of error-prone boilerplate code. I strongly recommend looking into existing libraries and/or code-generation solutions; the CommsChampion Ecosystem could be a good fit.
I recommend documenting your protocol as well as possible.
The X11 protocol specification could be inspirational.
Did you consider re-using some existing communication protocol, e.g. HTTP? Do you care about communication between heterogeneous systems and machines with different endianness and word sizes (e.g. a Raspberry Pi communicating with an Arduino, a Linux/PowerPC, or a Linux/x86-64 server)?
You could use C++ code generators like SWIG, or C code generators like RPCGEN, or in some cases write your own C++ code generator (perhaps with the help of GNU m4, GPP, or GNU autoconf).
You may use (or adapt) existing C++ frameworks like VMIME (which implements mail-related protocols).
You could use existing web service protocols (HTTP-like) with libraries like libonion or Wt.
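For the narrower question of avoiding raw memcpy, here is one possible sketch (not taken from any of the libraries above): assemble each integer field from individual bytes of the received buffer. The helper names and the little-endian wire order are assumptions, since the question does not specify the byte order:
#include <cstdint>
#include <string>
// Read a 16-bit value from two consecutive bytes (little-endian assumed).
uint16_t ReadU16(const std::string& data, std::size_t pos)
{
    return static_cast<uint16_t>(static_cast<uint8_t>(data[pos]) |
                                 (static_cast<uint8_t>(data[pos + 1]) << 8));
}
// Read a 32-bit value from four consecutive bytes (little-endian assumed).
uint32_t ReadU32(const std::string& data, std::size_t pos)
{
    return static_cast<uint32_t>(ReadU16(data, pos)) |
           (static_cast<uint32_t>(ReadU16(data, pos + 2)) << 16);
}
// Usage inside Parse(), once the sync word has been found at index i:
//   uint16_t msg_length = ReadU16(data, i + 2);
//   uint32_t device_id  = ReadU32(data, i + 4);
This also makes the wire byte order explicit instead of silently depending on the host machine's endianness, as the memcpy version does.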
In my project, I am responsible for migrating some MATLAB code to C++. The code below refers to serial communication from a computer to a microcontroller. The function CreatePackage generates a package which is then sent to the microcontroller using MATLAB's fwrite(serial) function.
function package = CreatePackage(V)
for ii = 1:size(V,2)
if V(ii) > 100
V(ii) = 100;
elseif V(ii) < -100
V(ii) = -100;
end
end
vel = zeros(1, 6);
for ii = 1:size(V,2)
if V(ii) > 0
vel(ii) = uint8(V(ii));
else
vel(ii) = uint8(128 + abs(V(ii)));
end
end
package = ['BD' 16+[6, vel(1:6)], 'P' 10 13]+0;
And then, to send the package:
function SendPackage(S, Package)
for ii = 1:length(S)
fwrite(S(ii), Package);
end
How can I create an array/vector in C++ to represent the package variable used in the MATLAB code above?
I have no experience with MATLAB, so any help would be greatly appreciated.
Thank you!
The package variable is being streamed as 12 unsigned 8-bit integers in your MATLAB code, so I would use a char[12] array in C++. You can double-check sizeof(char) on your platform to confirm that char is 1 byte.
Yes, MATLAB's default data type is double, but that does not mean your vector V isn't filled with integer values. You have to look at this data or the specs from your equipment to figure this out.
Whatever values are coming in, you are clipping the outgoing range to [-100, 100] and then offsetting them into the byte range [0, 255].
If you do not know a whole lot about MATLAB, you may be able to leverage what you know from C++ and use C as an interim. MATLAB's fwrite functionality lines up with that of C's, and you can include these functions in C++ with the #include <cstdio> preprocessor directive.
Here is an example solution:
#include <cstdio>    // fwrite
#include <cstdlib>   // std::abs
#include <algorithm> // std::min, std::max
...
void makeAndSendPackage(int *a6x1array, FILE **fHandles, int numHandles){
    // 'B', 'D', 16+6, six velocity bytes, 'P', LF, CR -> 12 bytes, as in the MATLAB package
    unsigned char packageBuffer[12] = {'B','D',22,0,0,0,0,0,0,'P','\n','\r'};
    for(int i = 0; i < 6; i++){
        int tmp = a6x1array[i];
        // clip to [-100, 100]; negatives map to 144+|v|, non-negatives to 16+v
        packageBuffer[i+3] = tmp < 0 ? std::abs(std::max(-100, tmp)) + 144
                                     : std::min(100, tmp) + 16;
    }
    for(int i = 0; i < numHandles; i++){
        fwrite(packageBuffer, 1, sizeof(packageBuffer), fHandles[i]);
    }
}
Let me know if you have questions about the above code.
I have the following:
struct sockaddr_in* server = (struct sockaddr_in*)sockAddr;
ULONG addr = htonl(server->sin_addr.s_addr);
USHORT port = server->sin_port;
char* frame[sizeof(USHORT) + sizeof(ULONG)];
memcpy(frame, &port, sizeof(USHORT));
memcpy(&frame[2], &addr, sizeof(ULONG));
int result = send(s, (const char*)frame, sizeof(frame), 0);
The size of USHORT is 2 bytes and ULONG is 4, so why is the total number of bytes received 24 instead of 6?
frame is an array of char pointers, and I assume that a char pointer on your system is 4 bytes. That's why sizeof(frame) is 24 bytes (6 pointers of 4 bytes each). You probably want an array of chars instead.
// array of char, not char *
char frame[sizeof(USHORT) + sizeof(ULONG)];
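A tiny self-contained sketch of the size difference being described (the 4-byte pointer size is an assumption that matches the question's 24-byte result; on a 64-bit build the pointer array would be 48 bytes):
#include <cstdio>
int main()
{
    char* asPointers[2 + 4]; // 6 char pointers: what the question declared
    char  asBytes[2 + 4];    // 6 bytes: what was intended
    std::printf("%zu %zu\n", sizeof(asPointers), sizeof(asBytes)); // e.g. prints "24 6" on a 32-bit build
    return 0;
}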
[Sorry for the confusion: The original post had "SSD" instead of "HDD" in the title, but I figured out that I performed the tests on an HDD by accident, as I was accessing the wrong mounting point. On an SSD this phenomenon did not occur. Still interesting that it happens for an HDD though.]
I am using the following code for reading in a loop from a given number of files of constant size. All files to be read exist and reading is successful.
It is clear that varying the file size has an effect on fMBPerSecond, because when reading files smaller than the page size, the whole page is still read. However, nNumberOfFiles has an effect on fMBPerSecond as well, which is what I do not understand. Apparently, it is not nNumberOfFiles itself that has an effect, but the product nNumberOfFiles * nFileSize.
But why should it have an effect? The files are opened/read/closed sequentially in a loop.
I tested with nFileSize = 65536. When choosing nNumberOfFiles = 10000 (or smaller) I get something around fMBPerSecond = 500 MB/s. With nNumberOfFiles = 20000 I get something around fMBPerSecond = 100 MB/s, which is a dramatic loss in performance.
Oh, and I should mention that I clear the disk cache before reading by executing:
sudo sync
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
Any ideas what is happening here behind the scenes would be welcome.
Pedram
void Read(int nFileSize, int nNumberOfFiles)
{
char szFilePath[4096];
unsigned char *pBuffer = new unsigned char[nFileSize];
Helpers::get_timer_value(true);
for (int i = 0; i < nNumberOfFiles; i++)
{
sprintf(szFilePath, "files/test_file_%.4i", i);
int f = open(szFilePath, O_RDONLY);
if (f != -1) // open() returns -1 on failure
{
if (read(f, pBuffer, (ssize_t) nFileSize) != (ssize_t) nFileSize)
printf("error: could not read file '%s'\n", szFilePath);
close(f);
}
else
{
printf("error: could not open file for reading '%s'\n", szFilePath);
}
}
const unsigned int t = Helpers::get_timer_value();
const float fMiliseconds = float(t) / 1000.0f;
const float fMilisecondsPerFile = fMiliseconds / float(nNumberOfFiles);
const float fBytesPerSecond = 1000.0f / fMilisecondsPerFile * float(nFileSize);
const float fMBPerSecond = fBytesPerSecond / 1024.0f / 1024.0f;
printf("t = %.8f ms / %.8i bytes - %.8f MB/s\n", fMilisecondsPerFile,
nFileSize, fMBPerSecond);
delete [] pBuffer;
}
There are several SSD models, especially the more expensive datacenter models, that combine an internal DRAM cache with the (slower) persistent NAND cells. As long as the data you read fits in the DRAM cache, you'll get a faster response.