I have an application which tracks body movement; it can also act as a server and send the body-tracking data out via TCP.
My client connects to the server and receives the data.
Before the data is sent to a client, something like this happens:
MTBF v10 = bodyPose2MTBF_v10(pose, m_startup, m_stream_position, 1);
ss << v10.get();
m_body_tracking_server->send(ss);
where the pose has all the information like the position of the head as a vector, the leg and shoulder positions, etc. ss is a new stringstream, and v10.get() fills the stream with the pose as a string. In the send method the stringstream (ss) is sent to the client.
In my client I get the stringstream and convert it back to MTBF. The problem is that the code is not from me, so I don't understand it all. When the pose is converted to MTBF, something like this happens:
data.addFlag(0);
data << torso.x() << torso.y() << torso.z();
sensorConfig |= MTBFData_0_4::RightArmPosition;
data.addFlag(1);
...
I get that it is adding a "flag" so that I can recognize the torso, for example, followed by the x, y and z coordinates of the torso. But the addFlag method looks like this:
addFlag(unsigned char n)
{
    m_flag |= (1i64 << n);
    m_buf.replace(7+8, 8, reinterpret_cast<char*>(&m_flag), 8);
    return *this;
}
What exactly does this method do? It shifts some stuff and reinterprets the char... I really need help understanding this!
Thank you!
It's hard to say for sure without seeing more code but...
m_flag |= (1i64 << n); ... m_flag is probably an unsigned 64-bit int, and this ORs bit n of m_flag with 1, i.e. it turns bit n on.
m_buf.replace(7+8, 8, reinterpret_cast<char*>(&m_flag), 8); ... now that the flag has been modified, this replaces the current 8-byte flag field in the buffer (at offset 15) with the new flag value.
addFlag is designed to accumulate flags in the m_flag member (some kind of 64-bit integer). Each call updates a fixed, 8-byte field within m_buf with the latest m_flag value.
m_flag can accumulate several flags, each added with a separate call to addFlag.
Keep in mind that the flags in m_flag accumulate, persisting until some other member function clears them.
Each flag value is associated with a single bit within a 64-bit integer value. This kind of program usually defines a list of possible flag values somewhere, either with a bunch of #define lines or as enums. Look for values between 0 and 63.
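For example, such definitions and their use might look like this (a minimal sketch; the enum names and values here are made up, and 1ULL is the portable spelling of the MSVC-specific 1i64):
#include <cstdint>

// Hypothetical flag values -- look for something like this in the
// codebase. Each value is a bit position between 0 and 63.
enum MTBFFlag : unsigned char {
    TorsoPosition    = 0,
    RightArmPosition = 1,
    LeftArmPosition  = 2
    // ... at most up to 63, since m_flag has 64 bits
};

uint64_t m_flag = 0;

void addFlagDemo(unsigned char n) {
    m_flag |= (1ULL << n); // set bit n, leaving previously set bits alone
}

// addFlagDemo(TorsoPosition) sets bit 0; addFlagDemo(RightArmPosition)
// then sets bit 1 as well: m_flag is now 0b11, marking both as present.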
Related
I need to pack this pointer (which is 64 bits wide) into 4 WORDs, and then later, in some other part of the code, I need to extract (reassemble) these words back into the pointer.
the code looks like this:
std::vector<WORD> vec;
vec.push_back( this ); // how?
Later in the code:
pThis = vec.at(0); // how?
I did take a look at the LOWORD/HIWORD and LOBYTE/HIBYTE macros, but I still have no idea how I would go about this.
If you ask why on earth would anyone need this, here is why:
I need to fill in the creation data of a DLGITEMTEMPLATEEX structure, whose last member is a WORD specifying the size of the creation data that follows; that following data is where you put your payload. My payload is the this pointer, and since I'm filling the structure using words (std::vector<WORD>), the last piece of data is 4 WORDs (64 bits) representing the pointer!
Any suggestion or sample code is welcome.
Well, the best way would be to define a struct that derives from DLGITEMTEMPLATEEX. That way, you can avoid doing the conversion manually. While it might not be defined by the standard, it will work on Windows. That kind of code is platform-specific anyway!
struct MyTemplate : DLGITEMTEMPLATEEX
{
    MyTemplate(MyPointerType *myVariable)
        : myVariable(myVariable)
    {
        // A base class member cannot be initialized in the member
        // initializer list, so extraCount is assigned here instead.
        extraCount = sizeof *this - sizeof(DLGITEMTEMPLATEEX);
    }
    MyPointerType *myVariable; // 64 bits if compiled for a 64-bit target
};
And when using the data, you do a static_cast to convert back to that structure.
You could add a few static_asserts and assertions to validate that it works as intended.
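For example, a compile-time check along these lines (just a sketch; whether the sizes match exactly depends on padding between the base and the appended pointer):
static_assert(sizeof(MyTemplate) == sizeof(DLGITEMTEMPLATEEX) + sizeof(MyPointerType*),
              "unexpected padding between the base template and the appended pointer");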
You could use simple bit shifting to split the 64-bit value into four 16-bit words.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(){
    uint64_t bigword = 0xDEADBEEFADEADBED;
    uint16_t fourth = bigword;        // low word
    uint16_t third  = bigword >> 16;
    uint16_t second = bigword >> 32;
    uint16_t first  = bigword >> 48;  // high word
    printf("%" PRIx64 " %x %x %x %x\n", bigword, fourth, third, second, first);
    return 0;
}
Then reverse the process when shifting the words back into the 64bit.
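A sketch of that reverse direction, reusing first through fourth from the example above; each word must be widened to 64 bits before shifting so the high bits aren't lost:
uint64_t reassembled = ((uint64_t)first  << 48)
                     | ((uint64_t)second << 32)
                     | ((uint64_t)third  << 16)
                     |  (uint64_t)fourth;
// reassembled == 0xDEADBEEFADEADBED again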
From what I understand, the bitwise inclusive OR operator compares every bit in the first and second operand and returns 1 if either bit is 1. Bjarne Stroustrup uses it like this (ist being an istream object):
ist.exceptions(ist.exceptions()|ios_base::badbit);
I haven't really worked with bits much in programming; should learning this be on my to-do list? I understand that if I had an int whose value was 9, the binary would be 00001001, but that is pretty much it. I do not understand why he would use this operator in the context that he used it in.
In this case, it simply means "turn a bit on".
Just an example: I have a byte, 0100 0011, that serves as 8 booleans. I want to turn on the 4th bit (i.e. make the 4th boolean true).
As a bitwise operation it looks like this: [0100 0011] OR [0000 1000], which gives you 0100 1011. That is, it simply sets the 4th bit to true, regardless of its original value.
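In code, that single step is just (a tiny sketch):
unsigned char flags = 0x43; // 0100 0011
flags |= 0x08;              // OR with 0000 1000 turns the 4th bit on
// flags is now 0x4B, i.e. 0100 1011, whatever that bit held before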
You can think of it as a way to add an option to a set of existing options. An analogy would be, if you're familiar with Linux:
PATH="$PATH:/somenewpath"
This says 'I want the existing path and this new path /somenewpath'
In this case he's saying 'I want the existing options in exceptions and I want the bad_bit option as well'
The std::ios::exceptions is a function which gets/sets the exception mask of the stream; the stream object uses this mask to decide in which situations it should throw an exception.
There are two overloads of this function:
iostate exceptions() const; // get current bit mask
void exceptions (iostate except); // set new bit mask
The statement you've posted sets a new exception mask on the stream object: the ios_base::badbit flag combined with the flags that are already set.
The bitwise OR operator is often used to build a bitfield out of an existing bitfield and a new flag. It can also be used to combine two flags into a new bitfield.
Here is an example with explanation:
// Enums are usually used to represent the bitfield
// flags, but you can also just use constant integer
// values. std::ios_base::badbit is, actually, just a
// constant integer.
// Note: each flag must occupy its own bit, so the
// values are distinct powers of two.
enum Flags {
    A = 1 << 0,
    B = 1 << 1,
    C = 1 << 2
};

// This function is similar to std::ios::exceptions
// in the sense that it returns a bitfield (an integer
// in which bits are manipulated directly).
int foo() {
    // Return a bitfield in which the A and B flags
    // are "on".
    return A | B;
}

int main() {
    // The actual bitfield, represented as a 32-bit integer.
    int bf = 0;
    // This is what you've seen (well, something similar).
    // We're assigning a new bitfield to the variable bf.
    // The new bitfield consists of the flags which are enabled
    // in the bitfield returned by foo(), plus the C flag.
    bf = foo() | C;
    return 0;
}
Here's my issue: I need to pass back two uint32_t's via a single uint32_t (because of how the API is set up...). I can hard code whatever other values I need to reverse the operation, but the parameter passed between functions needs to stay a single uint32_t.
This would be trivial if I could just bit-shift the two 32-bit ints into a single 64-bit int (like what was explained here), but the compiler wouldn't like that. I've also seen mathematical pairing functions, but I'm not sure if that's what I need in this case.
I've thought of setting up a simple cipher: the uint32_t could be the ciphertext, and I could just hard-code the key. That's one example, but it seems like overkill.
Is this even possible?
It is not possible to store more than 32 bits of information using only 32 bits. This is a basic result of information theory.
If you know that you're only using the low-order 16 bits of each value, you could shift one left 16 bits and combine them that way. But there's absolutely no way to get 64 bits worth of information (or even 33 bits) into 32 bits, period.
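A minimal sketch of that special case, assuming both values really fit in 16 bits (the function names are mine, not from any API):
#include <cstdint>

uint32_t pack(uint16_t hi, uint16_t lo) {
    // Widen hi before shifting, then put lo in the low half.
    return (uint32_t(hi) << 16) | lo;
}

void unpack(uint32_t packed, uint16_t &hi, uint16_t &lo) {
    hi = uint16_t(packed >> 16);     // top half
    lo = uint16_t(packed & 0xFFFF);  // bottom half
}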
Depending on how much trouble this is really worth, you could:
create a global array or vector of std::pair<uint32_t,uint32_t>
pass an index into the function, then your "reverse" function just looks up the result in the array.
write some code to decide which index to use when you have a pair to pass. The index needs to not be in use by anyone else, and since the array is global there may be thread-safety issues. Essentially what you are writing is a simple memory allocator.
As a special case, on a machine with 32-bit data pointers you could allocate the pair dynamically and reinterpret_cast the pointer to and from uint32_t, so you don't need any globals (see the sketch at the end of this answer).
Beware that you need to know whether or not the function you pass the value into might store the value somewhere to be "decoded" later, in which case you have a more difficult resource-management problem than if the function is certain to have finished using it by the time it returns.
In the easy case, and if the code you're writing doesn't need to be re-entrant at all, then you only need to use one index at a time. That means you don't need an array, just one pair. You could pass 0 to the function regardless of the values, and have the decoder ignore its input and look in the global location.
If both special cases apply (32 bit and no retaining of the value), then you can put the pair on the stack, and use no globals and no dynamic allocation even if your code does need to be re-entrant.
None of this is really recommended, but it could solve the problem you have.
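A sketch of the 32-bit-pointer special case mentioned above (this assumes data pointers are exactly 32 bits wide, which you must verify for your platform; the names are mine):
#include <cstdint>

struct pair32 { uint32_t a, b; };

static_assert(sizeof(pair32*) == sizeof(uint32_t), "needs 32-bit data pointers");

uint32_t encode(uint32_t a, uint32_t b) {
    // The pointer itself becomes the value we pass through the API.
    return reinterpret_cast<uint32_t>(new pair32{a, b});
}

pair32 decode_and_free(uint32_t v) {
    pair32 *p = reinterpret_cast<pair32*>(v);
    pair32 result = *p;
    delete p; // the decoder owns the allocation
    return result;
}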
You can use an intermediate global data structure to store the pairs of uint32_t, using your single uint32_t parameter as a key into that structure:
#include <cstdint>
#include <map>

struct my_pair {
    uint32_t a, b;
};

std::map<uint32_t, my_pair> global_pair_map;

uint32_t register_new_pair(uint32_t a, uint32_t b) {
    // Add the pair (a, b) to global_pair_map under a new key,
    // and return that key.
    static uint32_t next_key = 0;
    uint32_t key = next_key++;
    global_pair_map[key] = my_pair{a, b};
    return key;
}

void release_pair(uint32_t key) {
    // Remove the key from global_pair_map.
    global_pair_map.erase(key);
}

void callback(uint32_t user_data) {
    my_pair& p = global_pair_map[user_data];
    // Use your pair of uint32_t via p.a and p.b.
}

int main() {
    // number1, number2 and register_callback come from the API in question.
    uint32_t key = register_new_pair(number1, number2);
    register_callback(callback, key);
}
I'm writing an application and I had to do some pointer arithmetic. However, this application will be running on different architectures! I was not really sure whether this would be problematic, but after reading this article, I thought I had to change it.
Here was my original code that I didn't like much:
class Frame{
/* ... */
protected:
const u_char* const m_pLayerHeader; // Where header of this layer starts
int m_iHeaderLength; // Length of the header of this layer
int m_iFrameLength; // Header + payloads length
};
/**
 * Get the pointer to the payload of the current layer.
 * @return A pointer to the payload of the current layer
 */
const u_char* Frame::getPayload() const
{
// FIXME : Pointer arithmetic, portability!
return m_pLayerHeader + m_iHeaderLength;
}
Pretty bad, isn't it? Adding an int value to a u_char pointer! But then I changed it to this:
const u_char* Frame::getPayload() const
{
return &m_pLayerHeader[m_iHeaderLength];
}
I think now the compiler is able to tell how much to jump, right? Is the [] operation on an array considered pointer arithmetic? Does it fix the portability problem?
p + i and &p[i] are synonyms when p is a pointer and i is a value of integral type. So much so that you can even write &i[p] and it's still valid (just as you can write i + p).
The portability issue in the example you link was coming from sizeof(int) varying across platforms. Your code is just fine, assuming m_iHeaderLength is the number of u_chars you want to skip.
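All three spellings denote the same address:
#include <cassert>

void demo(const unsigned char *p, int i) {
    assert(p + i == &p[i]); // p[i] is defined as *(p + i)
    assert(p + i == &i[p]); // so i[p] is the same element
}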
In your code you are advancing m_pLayerHeader by m_iHeaderLength u_chars. As long as whatever wrote the data you are pointing into has the same size for u_char, and m_iHeaderLength is the number of u_chars in the header area, you are safe.
But if m_iHeaderLength really refers to bytes, and not u_chars, then you may have a problem if m_iHeaderLength is supposed to advance the pointer past types other than char.
Say you are sending data from a 32-bit system to a 16-bit system, and your header area is defined like this:
struct Header {
int something;
int somethingElse;
};
Assume that is only part of the total message defined by the struct Frame.
On the 32-bit machine you write the data out to a port that the 16-bit machine will read from.
port->write(myPacket, sizeof(Frame));
On the 16-bit machine you have the same Header definition, and try to read the information.
port->read(packetBuffer, sizeof(Frame));
You are already in trouble, because you've tried to read only half the amount of data the sender wrote. The size of int on the 16-bit machine doing the reading is two, so the size of its header is four. But the header size was eight on the sending machine: two ints of four bytes each.
Now you attempt to advance your pointer
m_iHeaderLength = sizeof(Header);
...
packetBuffer += m_iHeaderLength;
packetBuffer will still be pointing into data that was part of the header of the frame sent by the originator.
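One common way to avoid this kind of mismatch (a sketch, not necessarily what the original code should do) is to define the on-wire header with fixed-width types, so it is the same size on every platform; byte order is a separate concern not shown here:
#include <cstdint>

struct WireHeader {
    uint32_t something;     // exactly 4 bytes on both machines
    uint32_t somethingElse;
};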
If there is a portability problem, then no, that wouldn't fix it. m_pLayerHeader + m_iHeaderLength and &m_pLayerHeader[m_iHeaderLength] are completely equivalent (in this case).
I'm pulling my hair out trying to figure out how to read bytes off a serial device, check a checksum, and then convert them into something that I can actually read.
I have a device which "should" be sending me various messages, each started with the byte $83 and ended with the byte $84. The second-to-last byte is supposedly a checksum, generated by XORing all the other values together and comparing.
The actual values coming back should be alphanumeric, but I can't make heads or tails of the data. I'm newish to C++, which I'm sure isn't helping.
I've read several guides on serial programming, but I'm lost.
Can anyone help me, link me, or show me how to read bytes off a serial device, watch for $83 and $84, and then make sense of the data in between?
Here is the format of each message:
$FF         byte  Destination Address
$10         byte  Message Length (16 bytes)
$37         byte  Message Type
$00         byte  Message subtype
BankAngle   int   -179 to +180
PitchAngle  int   -90 to +90
YawAngle    int   -179 to +180
Slip        sint  -50 to +50
GForce      fps   0 to 6G
MISC        byte  Mode bits
Heading     word  0 to 359
N/A               not used
Voltage     byte  input voltage
This is all coming off an MGL SP-4 AHRS, and for ease of use I am targeting a Linux system, specifically Ubuntu. I am using the GCC compiler and the Eclipse CDT for development.
Where I'm lost
I can read the data into a buffer, but then I'm not versed enough in C++ to make sense of it after that, since it's not ASCII. I'm interested in learning what I need to know, but I don't know what I need to know.
I have a Perl / Java background.
Accomplishing this is going to be wholly dependent on the Operating System and platform that you target. Since the device you mention is mounted internally to an aircraft in the general use-case, I will assume you are not targeting a Windows platform, but more likely a Linux or embedded system. There are a number of resources available for performing serial I/O on such platforms (for example: the Serial Programming HOW-TO) that you should look at. Additionally, as suggested in the device's Installation Manual (available here about halfway down the page), you should "Consult the SP-4 OEM manual for message formats and message type selection." I suspect you will obtain the most relevant and useful information from that document. You may want to check if the manufacturer provides an API for your platform, as that would negate the need for you to implement the actual communication routine.
As far as making sense of the data, once you can read bytes from your serial interface, you can leverage structs and unions to make accessing your data more programmer-friendly. For the rough message outline you provided, something like this might be appropriate:
struct _message
{
    uint8_t  DestinationAddress;
    uint8_t  MessageLength;
    uint8_t  MessageType;
    uint8_t  MessageSubtype;
    int32_t  BankAngle;   // assuming an int is 32 bits
    int32_t  PitchAngle;
    int32_t  YawAngle;
    sint_t   Slip;        // not sure what a 'sint' is
    fps_t    GForce;      // likewise 'fps'
    uint8_t  MISC;
    uint16_t Heading;     // assuming a word is 16 bits
    uint8_t  Unused[UNUSED_BYTES]; // however many there are
    uint8_t  Voltage;     // the spec says this is a byte
};

struct myMessage
{
    union
    {
        char raw[MAX_MESSAGE_SIZE]; // sizeof(largest possible message)
        struct _message message;
    };
};
This way, if you were to declare struct myMessage serialData;, you can read your message into serialData.raw, and then conveniently access its members (e.g. serialData.message.DestinationAddress).
Edit: In response to your edit, I'll provide an example of how to make sense of your data. This example supposes there is only one message type you have to worry about, but it can be easily extended to other types.
struct myMessage serialData;
memcpy(serialData.raw, serialDataBuffer, MAX_MESSAGE_SIZE); //copy data from your buffer
if(serialData.message.MessageType == SOME_MESSAGE_TYPE)
{
//you have usable data here.
printf("I am a SOME_MESSAGE!\n");
}
Now, supposing that these integral types are really only useful for data transmission, you need to translate these bits into "usable data". Say one of these fields is actually an encoded floating-point number. One common scheme is to select a bit-weight (sometimes also called resolution). I don't know if this is directly applicable to your device, or if it is what the real values are, but let's say for the sake of discussion, that the YawAngle field had a resolution of 0.00014 degrees/bit. To translate the value in your message (serialData.message.YawAngle) from its uint32_t value to a double, for example, you might do this:
double YawAngleValue = 0.00014 * serialData.message.YawAngle;
...and that's about it. The OEM manual should tell you how the data is encoded, and you should be able to work out how to decode it from there.
Now, let's say you've got two message types to handle. The one I've already shown you, and a theoretical CRITICAL_BITS message. To add that type using the scheme I've laid out, you would first define the CRITICAL_BITS structure (perhaps as follows):
struct _critical_bits
{
    uint8_t  DestinationAddress;
    uint8_t  MessageLength;
    uint8_t  MessageType;
    uint8_t  MessageSubtype;
    uint32_t SomeCriticalData;
};
...and then add it to the struct myMessage definition like so:
struct myMessage
{
    union
    {
        char raw[MAX_MESSAGE_SIZE]; // sizeof(largest possible message)
        struct _message message;
        struct _critical_bits critical_message;
    };
};
...then you can access the SomeCriticalData just like the other fields.
if(serialData.message.MessageType == CRITICAL_MESSAGE_TYPE)
{
uint32_t critical_bits = serialData.critical_message.SomeCriticalData;
}
You can find a little more information on how this works by reading about structs. Bear in mind, that instances of the struct myMessage type will only ever contain one set of meaningful data at a time. Put more simply, if serialData contains CRITICAL_MESSAGE_TYPE data, then the data in serialData.critical_message is valid, but serialData.message is not --even though the language does not prevent you from accessing that data if you request it.
Edit: One more example; to calculate the checksum of a message, using the algorithm you've specified, you would probably want something like this (assuming you already know the message is completely within the buffer):
uint8_t calculate_checksum(struct myMessage *data)
{
uint8_t number_bytes = data->message.MessageLength;
uint8_t checksum = 0;
int i;
for(i=0; i<number_bytes; ++i)
{
//this performs a XOR with checksum and the byte
//in the message at offset i
checksum ^= data->raw[i];
}
return checksum;
}
You might need to adjust that function for bytes that aren't included, check to make sure that data != NULL, etc. but it should get you started.
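To tie this back to your framing question, here is a rough sketch of scanning a raw byte stream for the $83/$84 markers and verifying the checksum (the names and buffer handling are mine, and whether the $83 start byte itself is included in the XOR is something the OEM manual must settle):
#include <cstdint>
#include <cstddef>

// Scan buf for one $83 ... $84 frame; return a pointer to the first
// byte after $83 if the XOR checksum matches, or nullptr otherwise.
const uint8_t* find_frame(const uint8_t *buf, size_t len, size_t *payload_len)
{
    for (size_t start = 0; start < len; ++start) {
        if (buf[start] != 0x83) continue;            // find the start byte
        for (size_t end = start + 3; end < len; ++end) {
            if (buf[end] != 0x84) continue;          // find the end byte
            uint8_t checksum = 0;
            for (size_t i = start + 1; i + 1 < end; ++i)
                checksum ^= buf[i];                  // XOR the bytes between the markers
            if (checksum == buf[end - 1]) {          // second-to-last byte is the checksum
                *payload_len = end - 1 - (start + 1);
                return &buf[start + 1];
            }
        }
    }
    return nullptr;
}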