I am using sqlite3 on an embedded system with Modbus. I need to pack the information from sqlite3's select statement results into an array of shorts to be able to pass over Modbus.
Currently, I am only using 2 data types from sqlite3 (TEXT and INT). I am trying to pack each column's results into an array of shorts. For example:
typedef struct
{
    short unitSN[4];
    short unitClass[1];
} UnitSettings;

UnitSettings unitSettings;
// prepare and execute select statement for table, then put into structs members
s = sqlite3_prepare(db, sqlstmt, strlen(sqlstmt), &stmt, &pzTest);
s = sqlite3_step( stmt );
// I want to do something like this:
unitSettings.unitSN[] = sqlite3_column_text(stmt, 0);
unitSettings.unitClass[] = sqlite3_column_int(stmt, 1);
I was thinking about creating functions to convert from unsigned char* (the result of sqlite3_column_text) to a short array, and from int to a short array. Is this the way to go about it? Or is there a proper way to cast these results on the fly?
Also, I was thinking of making the struct members match the sqlite3 table types for easy copying, and then having a function at the end that runs through each struct member and converts it into an array of shorts.
EDIT: I just read about unions within structs and I think this would be exactly what I need:
typedef struct
{
    union
    {
        unsigned char* unitSN;
        short unitSNArr[4];
    };
    union
    {
        int unitClass;
        short unitClassArr[1];
    };
} UnitSettings;
It says that now they both look at the same piece of memory but can read it in different ways, which is what I want. This would be much easier than any kind of converting, right?
SQLite will not provide these conversions for you automatically. You'd have to do the conversions yourself.
I would just use plain text access, and then write free functions to translate that into the shorts. Something like this; I'm not really sure what interface would make the most sense for your access pattern.
void read_short(const char* data, size_t index, short& val) {
    val = *(reinterpret_cast<const short*>(&data[index*2]));
}
Maybe your use case already has them in arrays of shorts or something? I'd probably still only do one short per field if that's actually how you use the data.
Personally, I would just put them into the database as integers if you can help it. Otherwise you'd have to write special tools just to look at the database values, which isn't exactly friendly for maintenance.
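A minimal sketch of those conversion helpers, assuming the struct from the question and that the TEXT column fits in the 8 bytes of unitSN; pack_text and pack_int are hypothetical helper names:
#include <cstring>
#include <sqlite3.h>

typedef struct
{
    short unitSN[4];
    short unitClass[1];
} UnitSettings;

// Copy the bytes of a TEXT result into an array of shorts, zero-padding the rest.
static void pack_text(short *dst, size_t nShorts, const unsigned char *src)
{
    size_t capacity = nShorts * sizeof(short);
    memset(dst, 0, capacity);
    if (src) {
        size_t len = strlen((const char *)src);
        memcpy(dst, src, len < capacity ? len : capacity);
    }
}

// Store an INT result in a single short (assumes the value fits in 16 bits).
static void pack_int(short *dst, int value)
{
    *dst = (short)value;
}

// After sqlite3_step(stmt) returns SQLITE_ROW:
//   pack_text(unitSettings.unitSN, 4, sqlite3_column_text(stmt, 0));
//   pack_int(unitSettings.unitClass, sqlite3_column_int(stmt, 1));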
Related
I am writing a parser in C++ to parse a well defined binary file. I have declared all the required structs, and since only particular fields are of interest to me, I have skipped the non-required fields in my structs by declaring char arrays of a size equal to the skipped bytes. So I am just reading the file into a char array and casting the char pointer to my struct pointer. Now the problem is that all data fields in that binary are in big-endian order, so after typecasting I need to change the endianness of all the struct fields. One way is to do it manually for each and every field, but there are various structs with many fields, so it would be very cumbersome to do manually. What's the best way to achieve this? Also, since I'll be parsing very large files (terabytes of data), I need a fast way to do this.
EDIT: I have used __attribute__((packed)), so no need to worry about padding.
If you can do misaligned accesses with no penalty, and you don't mind compiler- or platform-specific tricks to control padding, this can work. (I assume you are OK with this since you mention __attribute__((packed))).
In this case the nicest approach is to write value wrappers for your raw data types, and use those instead of the raw types when declaring your struct in the first place. Remember the value wrapper must be trivial/POD-like for this to work. If you have a POSIX platform you can use ntohs/ntohl for the endian conversion; it's likely to be better optimized than whatever you write yourself.
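A minimal sketch of such a value wrapper, assuming a POSIX platform for ntohs; BigEndian16 and Record are hypothetical names:
#include <arpa/inet.h>
#include <cstdint>

// Trivial wrapper: stores the raw big-endian bytes, converts on read.
struct BigEndian16
{
    uint16_t raw;
    operator uint16_t() const { return ntohs(raw); }
};

// Declare the file-format struct with the wrapper instead of uint16_t.
struct __attribute__((packed)) Record
{
    BigEndian16 id;
    BigEndian16 length;
};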
If misaligned accesses are illegal or slow on your platform, you need to deserialize instead. Since we don't have reflection yet, you can do this with the same value wrappers (plus an Ignore<N> placeholder that skips N bytes for fields you're not interested in), and declare them in a tuple instead of a struct - you can iterate over the members of a tuple and tell each one to deserialize itself from the message.
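A rough sketch of the tuple-based idea, assuming C++17 for std::apply and a POSIX ntohs; the Cursor, BE16 and Ignore names are made up for illustration:
#include <arpa/inet.h>
#include <cstdint>
#include <cstring>
#include <tuple>

struct Cursor { const char *p; };

// A 16-bit big-endian field that knows how to read itself.
struct BE16
{
    uint16_t value = 0;
    void deserialize(Cursor &c) { uint16_t raw; std::memcpy(&raw, c.p, 2); value = ntohs(raw); c.p += 2; }
};

// Placeholder that skips N bytes of the message we don't care about.
template <size_t N> struct Ignore
{
    void deserialize(Cursor &c) { c.p += N; }
};

// Ask every member of the tuple to deserialize itself, in order.
template <class... Fields>
void deserializeAll(Cursor &c, std::tuple<Fields...> &t)
{
    std::apply([&](auto &...f) { (f.deserialize(c), ...); }, t);
}

// Usage: std::tuple<BE16, Ignore<4>, BE16> record; Cursor c{rawBytes}; deserializeAll(c, record);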
One way to do that is to combine the C preprocessor with C++ conversion operators. Write a couple of C++ classes like this one:
#include "immintrin.h"
class FlippedInt32
{
int value;
public:
inline operator int() const
{
return _bswap( value );
}
};
class FlippedInt64
{
__int64 value;
public:
inline operator __int64() const
{
return _bswap64( value );
}
};
Then,
#define int FlippedInt32
before including the header that defines these structures, and #undef it immediately after the #include.
This will replace all int fields in the structures with FlippedInt32, which has the same size but returns flipped bytes.
If it's your own structures which you can modify, you don't need the preprocessor part. Just replace the integers with the byte-flipping classes.
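A sketch of how the include would look (the header name is made up); note that redefining a keyword with the preprocessor is formally not allowed, so keep the scope of the macro as small as possible:
#define int FlippedInt32
#include "file_format_structs.h" // the header that defines the big-endian structures
#undef int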
If you can come up with a list of offsets (in bytes, relative to the top of the file) of the fields that need endian-conversion, as well as the size of those fields, then you could do all of the endian-conversion with a single for-loop, directly on the char array. E.g. something like this (pseudocode):
struct EndianRecord {
    size_t offsetFromTop;
    size_t fieldSizeInBytes;
};

std::vector<EndianRecord> todoList;
// [populate the todo list here...]

char * rawData = [pointer to the raw data]
for (size_t i=0; i<todoList.size(); i++)
{
    const EndianRecord & er = todoList[i];
    ByteSwap(&rawData[er.offsetFromTop], er.fieldSizeInBytes);
}

struct MyPackedStruct * data = (struct MyPackedStruct *) rawData;
// Now you can just read the member variables as usual because
// you know they are already in the correct endian-format.
... of course the difficult part is coming up with the correct todoList, but since the file format is well-defined, it should be possible to generate it algorithmically (or better yet, implement it as a generator with e.g. a GetNextEndianRecord() method that you can call, so that you don't have to store a very large vector in memory).
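A possible implementation of the ByteSwap() helper used in the pseudocode above, reversing a field of arbitrary size in place:
#include <algorithm>
#include <cstddef>

void ByteSwap(char *field, size_t fieldSizeInBytes)
{
    std::reverse(field, field + fieldSizeInBytes);
}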
I guess this should be possible. I tried but could not find a good answer. I basically want to make a dynamic structure: I want to read a file which tells me the data types that my structure will contain, and based on those values I want to build the structure. I want to use C++ for this. Think of Oracle as an example: you give it a CSV file and it recognizes what type each column should be and creates columns of that particular data type.
Can anyone please help me with this problem ?
An update: I guess I should have added a little bit of code to explain my problem statement. So here we go:
// PLC Data Block Structure.
// Todo: try to construct this structure from a file or something
struct MMSDataHeader{
    bool bHeader_Trigger;              //2
    unsigned char MachineTimeStamp[8]; //8
    std::string Header_MachineID;      //12
    std::string Header_Station;        //12
    int Header_MessageID;              //2
    int Header_MessageSequenceNo;      //2
    int Header_NumberOfProperties;     //2
    int MeasurementType;               //2
    bool Response_Acknowledge;         //2
};
typedef struct MMSDataHeader MMSDataHeader;
int PLCBox::GetHeader(){
    MMSDataHeader local_PLCData = { 0 };
    int res = -1;
    std::cout << "Reading Head :";
    if ((p_s7Client_ == NULL)) {
        std::cerr << "TSnap7Client is not connected.\n";
    }
    res = p_s7Client_->DBRead(nb_db_num_, 0, k_header_size, (void *)(&buffer_));
    //synchronous mode: default mode
    //inFile.read(buffer_, sizeof(buffer_));
    memcpy(&local_PLCData.bHeader_Trigger, buffer_, 1);
    memcpy(local_PLCData.MachineTimeStamp, buffer_ + 2, 8);
    memcpy(&local_PLCData.Header_MachineID, buffer_ + 10, 12);
    memcpy(&local_PLCData.Header_Station, buffer_ + 22, 12);
    memcpy(&local_PLCData.Header_MessageID, buffer_ + 34, 2);
    memcpy(&local_PLCData.Header_MessageSequenceNo, buffer_ + 36, 2);
    memcpy(&local_PLCData.Header_NumberOfProperties, buffer_ + 39, 2);
    memcpy(&local_PLCData.MeasurementType, buffer_ + 40, 2);
    memcpy(&local_PLCData.Response_Acknowledge, buffer_ + 42, 1);
    nb_props = local_PLCData.Header_NumberOfProperties;
    _b_read_trigger = local_PLCData.bHeader_Trigger;
    return local_PLCData.Header_NumberOfProperties;
}
This code works for me now and solves my purpose when I call GetHeader. As you can see, it is looking for exact bytes and structure from the PLCs. I want to make a system where the structure can be built from a file, so that only the file needs to be replaced and then the system works on its own. I think I can explore the factory design pattern to do this. Right now I can determine the type and content of the file for my data structure construction. Has anyone done something similar on their side?
It is unlikely to be possible to build C++ data types at runtime because C++ is a statically typed language; you'd be better off using Python or another dynamically typed language. However, if you think about the problem as a task of finding a value associated with a key, it is doable, but the resulting "structures" will be nowhere near as fast as statically defined C++ types.
For single-level structures (no sub-structures) you can use whatever sort of key->value class you want, something along these lines: std::map<std::string, std::pair<type, std::unique_ptr<...>>>. That is, keys as strings are mapped to pairs where the first pair member identifies the value's type, and the second holds a generic pointer to the value itself. Provided that there is a limited number of types you want to support, you can write a dynamic dispatcher that casts references to actual data types for stored values.
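A minimal sketch of that key -> (type tag, value) map, using std::shared_ptr<void> instead of unique_ptr so the deleter is stored with the pointer; only int and std::string are supported here and all the names are illustrative:
#include <iostream>
#include <map>
#include <memory>
#include <string>

enum class FieldType { Int, String };

using Field = std::pair<FieldType, std::shared_ptr<void>>;
using DynamicStruct = std::map<std::string, Field>;

// Dispatcher: cast the stored pointer back to its real type based on the tag.
void print(const DynamicStruct &s)
{
    for (const auto &kv : s) {
        if (kv.second.first == FieldType::Int)
            std::cout << kv.first << " = " << *std::static_pointer_cast<int>(kv.second.second) << "\n";
        else
            std::cout << kv.first << " = " << *std::static_pointer_cast<std::string>(kv.second.second) << "\n";
    }
}

int main()
{
    DynamicStruct s;
    s["Header_MessageID"] = { FieldType::Int, std::make_shared<int>(42) };
    s["Header_Station"]   = { FieldType::String, std::make_shared<std::string>("ST01") };
    print(s);
}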
If the input types are allowed to contain sub-structures, things become more complicated. But given that languages for semi-structured data exist (see XML, JSON etc.), you can find a ready solution for arbitrarily complex structured data.
Well, you can almost always do something - but you might be using the wrong language. C++ is strongly typed, as Max points out; other languages, for example PHP, are not, and a variable can actually change type. What you could do, which is like rolling your own language, is to override your math operators like +, -, etc. so that you have a base class which takes data, determines whether it is a float, string, integer, etc., and then lets you use the object as if it were a number. They would all be objects in your structure, with overrides for math and so on. It's a lot of work to do this, but there are examples out there of overriding math operators.
As the others said, it cannot be done, at least not in the way that you end up with a structure or class whose member variables have the correct types.
What you can do is parse the file and guess the data type from the format of the value, then create the corresponding type (you have to implement that for all the types you want to support), and finally define a data structure that stores all values with their different types (e.g. a vector with the position as index and using boost::any for the values).
As a side note:
Be aware that you can end up with different types for values at the same position, e.g. if a value is once written as '123' and once as '83.45'. The first value would probably be stored as some kind of integer, while the second would result in a float or double.
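A minimal sketch of the type-guessing step, trying integer first, then floating point, then falling back to string (the GuessedType enum is illustrative):
#include <cstdlib>
#include <string>

enum class GuessedType { Int, Double, String };

GuessedType guessType(const std::string &token)
{
    if (token.empty())
        return GuessedType::String;
    char *end = nullptr;
    (void)std::strtol(token.c_str(), &end, 10);
    if (*end == '\0')
        return GuessedType::Int;     // e.g. "123"
    (void)std::strtod(token.c_str(), &end);
    if (*end == '\0')
        return GuessedType::Double;  // e.g. "83.45"
    return GuessedType::String;
}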
I'm trying to convert a string to a structure. The struct's first field stores the number of chars present in the second field.
Please let me know what I'm missing in this program.
I'm getting the wrong output (some big integer value).
Update: Can this program be corrected to print 4 (nsize)?
#include <iostream>
using namespace std;
struct SData
{
int nsize;
char* str;
};
void main()
{
void* buffer = "4ABCD";
SData *obj = reinterpret_cast< SData*>(buffer);
cout<<obj->nsize;
}
Your approach is utterly wrong. First of all, the binary representation of an integer depends on the platform, i.e. the sizeof(int) and the endianness of the hardware. Second, you will not be able to populate the char pointer this way, so you need to write marshalling code that reads bytes according to the format, converts them to an int, and then allocates memory and copies the rest there. The simple approach of casting a block of memory to your struct will not work with this structure.
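A minimal sketch of that marshalling code, under the assumption that the length is stored as ASCII digits followed by exactly that many characters; for the input "4ABCD" it prints 4:
#include <cstdlib>
#include <cstring>
#include <iostream>

struct SData
{
    int nsize;
    char* str;
};

SData unpack(const char* buffer)
{
    SData obj;
    char* rest = nullptr;
    obj.nsize = static_cast<int>(std::strtol(buffer, &rest, 10)); // '4' -> 4
    obj.str = new char[obj.nsize + 1];
    std::memcpy(obj.str, rest, obj.nsize);
    obj.str[obj.nsize] = '\0';
    return obj;
}

int main()
{
    SData obj = unpack("4ABCD");
    std::cout << obj.nsize << " " << obj.str << "\n"; // prints "4 ABCD"
    delete[] obj.str;
    return 0;
}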
In an SData object, the integer occupies four bytes; in your buffer it occupies one byte. Further, the character '4' is different from the binary form of the integer 4.
If you want to make an ASCII representation of a piece of data then, yes, you need to do serialization. It is not simply a matter of hoping that a human-readable version of what you think of as the contents of a struct can be cast to that data. You have to choose a serialization format and then either write code to do it or use an existing library.
Popular Choices:
xml
json
yaml
I would use json - google for "c++ json library"
On the official website, there is a nice and relatively comprehensive example of how one could use CapnProto for C++ serialisation. What is missing is how to handle the second Blob type, capnp::Data, as only capnp::Text is covered.
Just for completeness, here is what the Schema Language says about the blob type:
Blobs: Text, Data
...
Text is always UTF-8 encoded and NUL-terminated.
Data is a completely arbitrary sequence of bytes.
So, if I have the following schema
struct Tiding {
id #0 :Text;
payload #1 :Data;
}
I can start building my message like this
::capnp::MallocMessageBuilder message;
Tiding::Builder tiding = message.initRoot<Tiding>();
tiding.setId("1");
At this point I got stuck. I can't do this:
typedef unsigned char byte;
byte data[100];
... //populate the array
tiding.setPayload(data)
//error: no viable conversion from 'byte [100]' to '::capnp::Data::Reader'
So I mucked around a bit and saw that capnp::Data wraps kj::ArrayPtr<const byte>, but I was unable to get hold of an ArrayPtr, much less use it to set the Payload field of my message.
I saw that there is a way to set the default value for the type Data (i.e. payload #5 :Data = 0x"a1 40 33";), but the schema language doesn't really translate to C++ in this case, so that also didn't help me.
I'd be grateful if somebody could point out what I am missing here. Also, how would I do this if I had List(Data) instead of just Data as the Payload in my schema?
A kj::ArrayPtr is fundamentally a pair of a pointer and a size.
You can create one by calling kj::arrayPtr(), which takes two arguments: a pointer, and the array size. Example:
byte buffer[256];
kj::ArrayPtr<byte> bufferPtr = kj::arrayPtr(buffer, sizeof(buffer));
kj::ArrayPtr has begin() and end() methods which return pointers, and a size() method. So you can convert back to pointer/size like:
byte* ptr = bufferPtr.begin();
size_t size = bufferPtr.size();
Putting it all together, in your example, you want:
tiding.setPayload(kj::arrayPtr(data, sizeof(data)));
Problem statement: The user provides some data which I have to store inside a structure. The data I receive comes in a data structure which allows the user to dynamically add data to it.
Requirement: I need a way to store this data 'inside' the structure, contiguously.
E.g. suppose the user can pass me strings which I have to store. So I wrote something like this:
void pushData( string userData )
{
    struct
    {
        string junk;
    } data;
    data.junk = userData;
}
Problem: When I do this kind of storage, the actual data is not really stored 'inside' the structure because string is not a POD. A similar problem comes when I receive a vector or list.
Then I could do something like this :
void pushData( string userData )
{
    struct
    {
        char junk[100];
    } data;
    // Copy userData into array junk
}
This stores the data 'inside' the structure, but then I can't put an upper limit on the size of the string the user can provide.
Can someone suggest some approach ?
P.S.: I read something about serializability, but couldn't really make out whether it could be helpful in my case. If it is the way to go forward, can someone give an idea of how to proceed with it?
Edit :
No this is not homework.
I have written an implementation which can pass this kind of structure over message queues. It works fine with PODs, but I need to extend it to pass on dynamic data as well.
This is how the message queue takes data:
i. Give it a pointer and tell it the size up to which it should read and transfer data.
ii. For plain old data types, the data is stored inside the structure, so I can easily pass the pointer of this structure through the message queue to other processes.
iii. But in the case of vector/string/list etc., the actual data is not inside the structure, and thus if I pass the pointer of this structure, the message queue will not really pass on the actual data, but rather the pointers which are stored inside this structure.
You can see this and this. I am trying to achieve something similar.
void pushData( string userData )
{
    struct Data
    {
        char junk[1];
    };
    struct Data* data = (struct Data*) malloc(userData.size() + 1);
    memcpy(data->junk, userData.data(), userData.size());
    data->junk[userData.size()] = '\0'; // assuming you want null termination
    // (the caller is responsible for free()ing the allocation eventually)
}
Here we use an array of length 1, but we allocate the struct using malloc so it can actually have any size we want.
You ostensibly have some rather artificial constraints, but to answer the question: it is not possible for a single struct to contain a variable amount of data... the closest you can come is to have the final member be, say, char[1], put such a struct at the start of a variably-sized heap region, and use the fact that array indexing is not checked to access memory beyond that character. To learn about this technique, see http://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html (or the answer John Zwinck just posted).
Another approach is e.g. template <size_t N> struct X { char data_[N]; };, but each instantiation is a separate struct type, and you can't pre-instantiate every size you might want at run-time (given you've said you don't want an upper bound). Even if you could, writing code that handles different instantiations as the data grows would be nightmarish, as would the code bloat it causes.
Having a structure in one place with a string member whose data lives in another place is almost always preferable to the hackery above.
Taking a hopefully-not-so-wild guess, I assume your interest is in serialising the object based on its starting address and size, in some generic binary block read/write...? If so, that's still problematic even if your goal were satisfied, as you'd need to find out the current data size from somewhere. Writing struct-specific serialisation routines that incorporate the variable-length data on the heap is much more promising.
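A minimal sketch of such a routine: copy a length prefix and the characters into one contiguous buffer, whose pointer and size can then be handed to the message queue:
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

std::vector<char> serialize(const std::string& userData)
{
    std::vector<char> buffer(sizeof(uint32_t) + userData.size());
    uint32_t len = static_cast<uint32_t>(userData.size());
    std::memcpy(buffer.data(), &len, sizeof(len));                   // length prefix
    std::memcpy(buffer.data() + sizeof(len), userData.data(), len);  // payload bytes
    return buffer; // pass buffer.data() and buffer.size() to the queue
}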
A simple solution: estimate a maximum size for the data (e.g. 1000). A fixed-size buffer avoids repeatedly freeing and malloc'ing a new size (which fragments memory) when pushData is called multiple times.
#define MAX_SIZE 1000

void pushData( string userData )
{
    struct Data
    {
        char junk[MAX_SIZE];
    } data;                            // fixed-size buffer, no heap allocation
    // assumes userData.size() < MAX_SIZE
    memcpy(data.junk, userData.data(), userData.size());
    data.junk[userData.size()] = '\0'; // assuming you want null termination
}
As mentioned by John Zwinck, you can use dynamic memory allocation to solve your problem.
void pushData( string userData )
{
    struct Data
    {
        char *junk;
    };
    struct Data *d = (struct Data *) calloc(1, sizeof(struct Data));
    d->junk = (char *) malloc(userData.size() + 1);
    strcpy(d->junk, userData.c_str());
}