Error[Lp011] running out of memory when I shouldn't - C++

I'm currently working on a project that supports multiple language settings. To handle this, a table is used to store all the texts in the different languages used in the program; whenever a text is about to be written to the screen, this table is consulted and, depending on the current language setting, a text string is returned. I recently joined this project and noticed that the way this is stored isn't very efficient: for every new language that is added, the time it takes to look up the correct string increases. I therefore came up with a (in my mind) better solution. However, when I tried to implement it I ran into the problem of getting an error that too much memory is used, and I don't understand why. I am using IAR Embedded Workbench.
The original solution in pseudo/c++ code:
typedef struct
{
TextId::e textId;
Language::e language;
const char* textString;
} Text;
static const Text s_TextMap[] =
{
{ TextId::RESTORE_DATA_Q ,Language::ENGLISH ,"Restore Data?" },
{ TextId::RESTORE_DATA_Q ,Language::SWEDISH ,"Återställa data?" },
{ TextId::RESTORE_DATA_Q ,Language::GERMAN ,"Wiederherstellen von Daten?" },
{ TextId::CHANGE_LANGUAGE ,Language::ENGLISH ,"Change Language" },
{ TextId::CHANGE_LANGUAGE ,Language::SWEDISH ,"Välj språk" },
{ TextId::CHANGE_LANGUAGE ,Language::GERMAN ,"Sprache wählen" },
};
My solution in pseudo/c++ code:
typedef struct
{
const char* pEngText;
const char* pSweText;
const char* pGerText;
} Texts;
static Texts addTexts(const char* pEngText, const char* pSweText, const char* pGerText)
{
Texts t;
t.pEngText = pEngText;
t.pSweText = pSweText;
t.pGerText = pGerText;
return t;
}
typedef struct
{
TextId::e textId;
Texts texts;
} TextTest;
static const TextTest s_TextMapTest[] =
{
{TextId::RESTORE_DATA_Q, addTexts("Restore Data?","Återställa data?","Wiederherstellen von Daten?")},
{TextId::CHANGE_LANGUAGE, addTexts("Change Language","Välj språk","Sprache wählen")},
};
My solution is obviously faster to look up in the average case, and based on my calculations it should also use less memory. With the full tables in place I've calculated that the original solution requires 7668 bytes and that my solution requires 4248 bytes. The way I did this was to implement the full tables in a small test program and use sizeof(s_TextMap). However, when I try to compile the code I get linker errors saying:
Error[Lp011]: section placement failed
unable to allocate space for sections/blocks with a total estimated minimum size of 0x130301 bytes (max align 0x1000) in <[0x0000a000-0x0007ffff]> (total uncommitted space 0x757eb).
Error[Lp011]: section placement failed
unable to allocate space for sections/blocks with a total estimated minimum size of 0x47de4 bytes (max align 0x20) in <[0x1fff0000-0x2000fff0]> (total uncommitted space 0x1fff1).
Error[Lp021]: the destination for compressed initializer batch "USER_DEFAULT_MEMORY-1" is placed at an address that is dependent on the size of the batch, which is not allowed when using lz77 compression. Consider using "initialize by copy with packing = zeros" (or none) instead.
Error[Lp021]: the destination for compressed initializer batch "USER_DEFAULT_MEMORY-1" is placed at an address that is dependent on the size of the batch, which is not allowed when using lz77 compression. Consider using "initialize by copy with packing = zeros" (or none) instead.
The error I'm most confused about is the first one, which states that my code is estimated to take 0x130301 bytes of memory; I see no way this is possible. Could this be a bug in IAR, or am I missing something?

In your solution s_TextMapTest[] will necessarily be located in RAM rather than ROM because the pointers are set at run-time - though it is not clear how you managed to use a function call as an array element initialiser. On most microcontrollers RAM is a far more limited resource. You have provided no information about the target or its memory map.
Either way you should check the memory map output from your linker to verify that data is located appropriately and where you expect it to be.
The original code and the solution you propose in your own answer are both ROMable.
A solution that maintains the form of your original proposal while remaining ROMable is to write addTexts() as a macro rather than a function:
#define addTexts( eng, swe, ger, fin ) {eng, swe, ger, fin}
though I cannot see what the advantage is, and you are not really "adding" text - the text was always there by initialisation.
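For illustration, a minimal sketch of the macro in use, dropping the fourth (fin) parameter to match the three-language struct from the question. Because the macro expands to a plain brace initializer, the whole array is a constant aggregate that the linker can place in ROM:
#define addTexts( eng, swe, ger ) { eng, swe, ger }
static const TextTest s_TextMapTest[] =
{
    { TextId::RESTORE_DATA_Q,  addTexts("Restore Data?", "Återställa data?", "Wiederherstellen von Daten?") },
    { TextId::CHANGE_LANGUAGE, addTexts("Change Language", "Välj språk", "Sprache wählen") },
};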

OK, so I managed to get it to work. I removed the extra struct and the addTexts function and just simplified it to:
typedef struct
{
TextId::e textId;
const char* pEngText;
const char* pSweText;
const char* pGerText;
const char* pFinText;
} Text;
It doesn't look as good to me, but at least it works.
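For reference, a minimal sketch of how a lookup against a table of this flattened type could work, assuming the table (here called s_TextMap, a name chosen for the example) is sorted by textId, the TextId enum values are contiguous from zero, and a Language::FINNISH value exists alongside the others:
static const char* getText(TextId::e textId, Language::e language)
{
    const Text& entry = s_TextMap[textId]; // direct index, no linear search
    switch (language)
    {
    case Language::SWEDISH: return entry.pSweText;
    case Language::GERMAN:  return entry.pGerText;
    case Language::FINNISH: return entry.pFinText;
    default:                return entry.pEngText;
    }
}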

Related

sending IOKit command with dynamic length

I'm using the IOKit framework to communicate with my driver using IOConnectCallMethod from the user-space client and IOExternalMethodDispatch on the driver side.
So far I have been able to send fixed-length commands, and now I wish to send a variable-size array of chars (i.e. a full path).
However, it seems that the driver-side and client-side command lengths are coupled, which means that checkStructureInputSize from IOExternalMethodDispatch in the driver must be equal to inputStructCnt from IOConnectCallMethod on the client side.
Here are the struct contents on both sides :
DRIVER :
struct IOExternalMethodDispatch
{
IOExternalMethodAction function;
uint32_t checkScalarInputCount;
uint32_t checkStructureInputSize;
uint32_t checkScalarOutputCount;
uint32_t checkStructureOutputSize;
};
CLIENT:
kern_return_t IOConnectCallMethod(
mach_port_t connection, // In
uint32_t selector, // In
const uint64_t *input, // In
uint32_t inputCnt, // In
const void *inputStruct, // In
size_t inputStructCnt, // In
uint64_t *output, // Out
uint32_t *outputCnt, // In/Out
void *outputStruct, // Out
size_t *outputStructCnt) // In/Out
Here's my failed attempt to use a variable-size command:
std::vector<char> rawData; //vector of chars
// filling the vector with filePath ...
kr = IOConnectCallMethod(_connection, kCommandIndex , 0, 0, rawData.data(), rawData.size(), 0, 0, 0, 0);
And on the driver command handler side, I'm calling IOUserClient::ExternalMethod with IOExternalMethodArguments *arguments and IOExternalMethodDispatch *dispatch, but this requires the exact length of the data I'm passing from the client, which is dynamic.
This doesn't work unless I set up the dispatch entry with the exact length of data it should expect.
Any idea how to resolve this or perhaps there's a different API I should use in this case ?
As you have already discovered, the answer for accepting variable-length "struct" inputs and outputs is to specify the special kIOUCVariableStructureSize value for input or output struct size in the IOExternalMethodDispatch.
This will allow the method dispatch to succeed and call out to your method implementation. A nasty pitfall however is that structure inputs and outputs are not necessarily provided via the structureInput and structureOutput pointer fields in the IOExternalMethodArguments structure. In the struct definition (IOKit/IOUserClient.h), notice:
struct IOExternalMethodArguments
{
…
const void * structureInput;
uint32_t structureInputSize;
IOMemoryDescriptor * structureInputDescriptor;
…
void * structureOutput;
uint32_t structureOutputSize;
IOMemoryDescriptor * structureOutputDescriptor;
…
};
Depending on the actual size, the memory region might be referenced by structureInput or structureInputDescriptor (and structureOutput or structureOutputDescriptor) - the crossover point has typically been 8192 bytes, or 2 memory pages. Anything smaller will come in as a pointer, anything larger will be referenced by a memory descriptor. Don't count on a specific crossover point though, that's an implementation detail and could in principle change.
How you handle this situation depends on what you need to do with the input or output data. Usually though, you'll want to read it directly in your kext - so if it comes in as a memory descriptor, you need to map it into the kernel task's address space first. Something like this:
static IOReturn my_external_method_impl(OSObject* target, void* reference, IOExternalMethodArguments* arguments)
{
IOMemoryMap* map = nullptr;
const void* input;
size_t input_size;
if (arguments->structureInputDescriptor != nullptr)
{
map = arguments->structureInputDescriptor->createMappingInTask(kernel_task, 0, kIOMapAnywhere | kIOMapReadOnly);
if (map == nullptr)
{
// insert error handling here
return …;
}
input = reinterpret_cast<const void*>(map->getAddress());
input_size = map->getLength();
}
else
{
input = arguments->structureInput;
input_size = arguments->structureInputSize;
}
// …
// do stuff with input here
// …
OSSafeReleaseNULL(map); // make sure we unmap on all function return paths!
return …;
}
The output descriptor can be treated similarly, except without the kIOMapReadOnly option of course!
CAUTION: SUBTLE SECURITY RISK:
Interpreting user data in the kernel is generally a security-sensitive task. Until recently, the structure input mechanism was particularly vulnerable - because the input struct is memory-mapped from user space to kernel space, another userspace thread can still modify that memory while the kernel is reading it. You need to craft your kernel code very carefully to avoid introducing a vulnerability to malicious user clients. For example, bounds-checking a userspace-supplied value in mapped memory and then re-reading it under the assumption that it's still within the valid range is wrong.
The most straightforward way to avoid this is to make a copy of the memory once and then only use the copied version of the data. To take this approach, you don't even need to memory-map the descriptor: you can use the readBytes() member function. For large amounts of data, you might not want to do this for efficiency though.
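A minimal sketch of that copy-once approach via readBytes(), written as a fragment of the same external-method implementation shown above (kMyMaxInputSize is a hypothetical upper bound you would pick for your own command set; error handling kept short):
if (arguments->structureInputDescriptor != nullptr)
{
    IOByteCount len = arguments->structureInputDescriptor->getLength();
    if (len == 0 || len > kMyMaxInputSize)   // kMyMaxInputSize: hypothetical limit
        return kIOReturnBadArgument;
    void* copy = IOMalloc(len);
    if (copy == nullptr)
        return kIOReturnNoMemory;
    arguments->structureInputDescriptor->readBytes(0, copy, len);
    // ... parse the private copy here; userspace can no longer touch it ...
    IOFree(copy, len);
}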
Recently (during the 10.12.x cycle) Apple changed the structureInputDescriptor so it's created with the kIOMemoryMapCopyOnWrite option. (Which as far as I can tell was created specifically for this purpose.) The upshot of this is that if userspace modifies the memory range, it doesn't modify the kernel mapping but transparently creates copies of the pages it writes to. Relying on this assumes your user's system is fully patched up though. Even on a fully patched system, the structureOutputDescriptor suffers from the same issue, so treat it as write-only from the kernel's point of view. Never read back any data you wrote there. (Copy-on-write mapping makes no sense for the output struct.)
After going through the manual again, I've found the relevant paragraph:
The checkScalarInputCount, checkStructureInputSize, checkScalarOutputCount, and checkStructureOutputSize fields allow for sanity-checking of the argument list before passing it along to the target object. The scalar counts should be set to the number of scalar (64-bit) values the target's method expects to read or write. The structure sizes should be set to the size of any structures the target's method expects to read or write. For either of the struct size fields, if the size of the struct can't be determined at compile time, specify kIOUCVariableStructureSize instead of the actual size.
So all I had to do to avoid the size verification was to set the checkStructureInputSize field to kIOUCVariableStructureSize in IOExternalMethodDispatch, and the command was passed to the driver properly.
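A minimal sketch of the corresponding dispatch table entry (the class and method names are placeholders, not taken from the question):
static const IOExternalMethodDispatch sMethods[] =
{
    // kCommandIndex: variable-length input struct, no scalars, no output
    { (IOExternalMethodAction)&MyUserClient::sSendPath, 0, kIOUCVariableStructureSize, 0, 0 },
};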

buffer to struct conversion

I'm trying to convert a string to a structure. The struct's first field stores the number of chars present in the second field.
Please let me know what I'm missing in this program.
I'm getting the wrong output (some big integer value).
Update: can this program be corrected to print 4 (nsize)?
#include <iostream>
using namespace std;
struct SData
{
int nsize;
char* str;
};
int main()
{
const void* buffer = "4ABCD";
const SData* obj = reinterpret_cast<const SData*>(buffer);
cout << obj->nsize; // prints some large value: the bytes '4','A','B','C' reinterpreted as an int
}
Your approach is fundamentally wrong. First of all, the binary representation of an integer depends on the platform, i.e. on sizeof(int) and on the hardware's endianness. Second, you will not be able to populate the char pointer this way; you need to write some marshalling code that reads bytes according to the format, converts them to an int, and then allocates memory and copies the rest there. The simple approach of casting a block of memory to your struct will not work with this structure.
In an SData object, the integer occupies four bytes; in your buffer it occupies one. Furthermore, the character '4' is different from the binary form of the integer 4.
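If the goal is literally to get 4 out of "4ABCD", a minimal sketch that parses the buffer instead of casting it (assuming the count is a single ASCII digit followed by the characters themselves):
#include <iostream>
struct SData
{
    int nsize;
    const char* str;
};
int main()
{
    const char* buffer = "4ABCD";
    SData obj;
    obj.nsize = buffer[0] - '0'; // ASCII digit '4' -> integer 4
    obj.str   = buffer + 1;      // points at "ABCD" inside the buffer
    std::cout << obj.nsize << '\n'; // prints 4
    return 0;
}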
If you want to make an ASCII representation of a piece of data then, yes, you need to do serialization. It is not simply a matter of hoping that a human-readable version of what you think of as the contents of a struct can be cast back to that data. You have to choose a serialization format and then either write the code to do it or use an existing library.
Popular Choices:
xml
json
yaml
I would use json - google for "c++ json library"

How to use inplace const char* as std::string content

I am working on an embedded SW project. A lot of strings are stored inside flash memory. I would like to use these strings (usually const char* or const wchar*) as a std::string's data, which means I want to avoid creating a copy of the original data because of memory restrictions.
An extended use might be to read the flash data via stringstream directly out of the flash memory.
Example which unfortunately is not working in place:
const char* flash_adr = reinterpret_cast<const char*>(0x00300000);
size_t length = 3000;
std::string str(flash_adr, length);
Any ideas will be appreciated!
If you are willing to go with compiler and library specific implementations, here is an example that works in MSVC 2013.
#include <iostream>
#include <string>
int main() {
std::string str("A std::string with a larger length than yours");
char *flash_adr = "Your source from the flash";
char *reset_adr = str._Bx._Ptr; // Keep the old address around
// Change the inner buffer
(char*)str._Bx._Ptr = flash_adr;
std::cout << str << std::endl;
// Reset the pointer or the program will crash
(char*)str._Bx._Ptr = reset_adr;
return 0;
}
It will print Your source from the flash.
The idea is to reserve a std::string capable of fitting the strings in your flash and keep on changing its inner buffer pointer.
You need to customize this for your compiler and as always, you need to be very very careful.
I have now used string_span described in CPP Core Guidelines (https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md). GSL provides a complete implementation (GSL: Guidelines Support Library https://github.com/Microsoft/GSL).
If you know the address of your string inside flash memory you can just use the address directly with the following constructor to create a string_span.
constexpr basic_string_span(pointer ptr, size_type length) noexcept
: span_(ptr, length)
{}
std::string_view might have done the same job, as Captain Obvlious (https://stackoverflow.com/users/845568/captain-obvlious) pointed out in what is my favourite comment on the question.
I am quite happy with the solution. It works well performance-wise and provides good readability.
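If your toolchain supports C++17, a minimal sketch of the std::string_view alternative (same flash address and length as in the question; a string_view never copies or owns the characters, so nothing ends up in RAM):
#include <string_view>
const char* flash_adr = reinterpret_cast<const char*>(0x00300000);
size_t length = 3000;
std::string_view str(flash_adr, length); // wraps the flash-resident characters in place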

Parsing a binary file. What is a modern way?

I have a binary file with some layout I know. For example let format be like this:
2 bytes (unsigned short) - length of a string
5 bytes (5 x chars) - the string - some id name
4 bytes (unsigned int) - a stride
24 bytes (6 x float - 2 strides of 3 floats each) - float data
The file should look like (I added spaces for readability):
5 hello 3 0.0 0.1 0.2 -0.3 -0.4 -0.5
Here 5 is 2 bytes: 0x05 0x00. "hello" is 5 bytes, and so on.
Now I want to read this file. Currently I do it so:
load file to ifstream
read this stream to char buffer[2]
cast it to unsigned short: unsigned short len{ *((unsigned short*)buffer) };. Now I have length of a string.
read a stream to vector<char> and create a std::string from this vector. Now I have string id.
the same way read next 4 bytes and cast them to unsigned int. Now I have a stride.
while not end of file read floats the same way - create a char bufferFloat[4] and cast *((float*)bufferFloat) for every float.
This works, but for me it looks ugly. Can I read directly to unsigned short or float or string etc. without char [x] creating? If no, what is the way to cast correctly (I read that style I'm using - is an old style)?
P.S.: while I was writing the question, a clearer way to put it came to mind - how do I cast an arbitrary number of bytes from an arbitrary position in char[x]?
Update: I forgot to mention explicitly that the string and float data lengths are not known at compile time and are variable.
If it is not for learning purposes, and if you have freedom in choosing the binary format, you'd better consider using something like protobuf, which will handle the serialization for you and allows interoperating with other platforms and languages.
If you cannot use a third party API, you may look at QDataStream for inspiration
Documentation
Source code
The C way, which would work fine in C++, would be to declare a struct:
#pragma pack(1)
struct contents {
// data members;
};
Note that
You need to use a pragma to make the compiler align the data as-it-looks in the struct;
This technique only works with POD types
And then cast the read buffer directly into the struct type:
std::vector<char> buf(sizeof(contents));
file.read(buf.data(), buf.size());
contents *stuff = reinterpret_cast<contents *>(buf.data());
Now if your data's size is variable, you can separate in several chunks. To read a single binary object from the buffer, a reader function comes handy:
template<typename T>
const char *read_object(const char *buffer, T& target) {
target = *reinterpret_cast<const T*>(buffer);
return buffer + sizeof(T);
}
The main advantage is that such a reader can be specialized for more advanced c++ objects:
template<typename CT>
const char *read_object(const char *buffer, std::vector<CT>& target) {
size_t size = target.size();
CT const *buf_start = reinterpret_cast<const CT*>(buffer);
std::copy(buf_start, buf_start + size, target.begin());
return buffer + size * sizeof(CT);
}
And now in your main parser:
int n_floats;
iter = read_object(iter, n_floats);
std::vector<float> my_floats(n_floats);
iter = read_object(iter, my_floats);
Note: As Tony D observed, even if you can get the alignment right via #pragma directives and manual padding (if needed), you may still encounter incompatibility with your processor's alignment, in the form of (best case) performance issues or (worst case) trap signals. This method is probably interesting only if you have control over the file's format.
Currently I do it so:
load file to ifstream
read this stream to char buffer[2]
cast it to unsigned short: unsigned short len{ *((unsigned short*)buffer) };. Now I have length of a string.
That last risks a SIGBUS (if your character array happens to start at an odd address and your CPU can only read 16-bit values that are aligned at an even address), performance (some CPUs will read misaligned values but slower; others like modern x86s are fine and fast) and/or endianness issues. I'd suggest reading the two characters then you can say (x[0] << 8) | x[1] or vice versa, using htons if needing to correct for endianness.
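A minimal sketch of that byte-wise read (assuming the file stores the length little-endian, as the 0x05 0x00 example suggests):
unsigned char raw[2];
if (input_fstream.read(reinterpret_cast<char*>(raw), 2))
{
    // assemble the value explicitly so neither host alignment nor host endianness matters
    unsigned short len = static_cast<unsigned short>(raw[0] | (raw[1] << 8));
    // ...use len...
}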
read a stream to vector<char> and create a std::string from this vector. Now I have string id.
No need... just read directly into the string:
std::string s(the_size, ' ');
if (input_fstream.read(&s[0], s.size()) &&
input_fstream.gcount() == s.size())
...use s...
the same way read next 4 bytes and cast them to unsigned int. Now I have a stride.
while not end of file read floats the same way - create a char bufferFloat[4] and cast *((float*)bufferFloat) for every float.
Better to read the data directly over the unsigned ints and floats, as that way the compiler will ensure correct alignment.
This works, but for me it looks ugly. Can I read directly to unsigned short or float or string etc. without char [x] creating? If no, what is the way to cast correctly (I read that style I'm using - is an old style)?
struct Data
{
uint32_t x;
float y[6];
};
Data data;
if (input_stream.read((char*)&data, sizeof data) &&
input_stream.gcount() == sizeof data)
...use x and y...
Note the code above avoids reading data into potentially unaligned character arrays; it's unsafe to reinterpret_cast data in a potentially unaligned char array (including inside a std::string) due to alignment issues. Again, you may need some post-read conversion with htonl if there's a chance the file content differs in endianness. If there's an unknown number of floats, you'll need to calculate and allocate sufficient storage with alignment of at least 4 bytes, then aim a Data* at it - it's legal to index past the declared array size of y as long as the memory content at the accessed addresses was part of the allocation and holds a valid float representation read in from the stream. Simpler - but with an additional read, so possibly slower - is to read the uint32_t first, then new float[n] and do a further read into there.
Practically, this type of approach can work and a lot of low level and C code does exactly this. "Cleaner" high-level libraries that might help you read the file must ultimately be doing something similar internally....
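A minimal sketch of that last read-the-count-first variant, using a std::vector rather than new[] (error handling kept to the same minimum as above):
uint32_t n_floats = 0;
if (input_stream.read(reinterpret_cast<char*>(&n_floats), sizeof n_floats))
{
    std::vector<float> floats(n_floats);
    input_stream.read(reinterpret_cast<char*>(floats.data()),
                      floats.size() * sizeof(float));
    // floats now holds the payload, correctly aligned for the host
}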
I actually implemented a quick and dirty binary format parser to read .zip files (following Wikipedia's format description) just last month, and being modern I decided to use C++ templates.
On some specific platforms, a packed struct could work, however there are things it does not handle well... such as fields of variable length. With templates, however, there is no such issue: you can get arbitrarily complex structures (and return types).
A .zip archive is relatively simple, fortunately, so I implemented something simple. Off the top of my head:
using Buffer = std::pair<unsigned char const*, size_t>;
template <typename OffsetReader>
class UInt16LEReader: private OffsetReader {
public:
UInt16LEReader() {}
explicit UInt16LEReader(OffsetReader const reader): OffsetReader(reader) {}
uint16_t read(Buffer const& buffer) const {
OffsetReader const& reader = *this; // "or" is a reserved alternative token in C++, so use another name
size_t const offset = reader.read(buffer);
assert(offset <= buffer.second && "Incorrect offset");
assert(offset + 2 <= buffer.second && "Too short buffer");
unsigned char const* begin = buffer.first + offset;
// http://commandcenter.blogspot.fr/2012/04/byte-order-fallacy.html
return (uint16_t(begin[0]) << 0)
+ (uint16_t(begin[1]) << 8);
}
}; // class UInt16LEReader
// Declined for UInt[8|16|32][LE|BE]...
Of course, the basic OffsetReader actually has a constant result:
template <size_t O>
class FixedOffsetReader {
public:
size_t read(Buffer const&) const { return O; }
}; // class FixedOffsetReader
and since we are talking templates, you can switch the types at leisure (you could implement a proxy reader which delegates all reads to a shared_ptr which memoizes them).
What is interesting, though, is the end-result:
// http://en.wikipedia.org/wiki/Zip_%28file_format%29#File_headers
class LocalFileHeader {
public:
template <size_t O>
using UInt32 = UInt32LEReader<FixedOffsetReader<O>>;
template <size_t O>
using UInt16 = UInt16LEReader<FixedOffsetReader<O>>;
UInt32< 0> signature;
UInt16< 4> versionNeededToExtract;
UInt16< 6> generalPurposeBitFlag;
UInt16< 8> compressionMethod;
UInt16<10> fileLastModificationTime;
UInt16<12> fileLastModificationDate;
UInt32<14> crc32;
UInt32<18> compressedSize;
UInt32<22> uncompressedSize;
using FileNameLength = UInt16<26>;
using ExtraFieldLength = UInt16<28>;
using FileName = StringReader<FixedOffsetReader<30>, FileNameLength>;
using ExtraField = StringReader<
CombinedAdd<FixedOffsetReader<30>, FileNameLength>,
ExtraFieldLength
>;
FileName filename;
ExtraField extraField;
}; // class LocalFileHeader
This is rather simplistic, obviously, but incredibly flexible at the same time.
An obvious axis of improvement would be to improve chaining since here there is a risk of accidental overlaps. My archive reading code worked the first time I tried it though, which was evidence enough for me that this code was sufficient for the task at hand.
I had to solve this problem once. The data files were packed FORTRAN output. Alignments were all wrong. I succeeded with preprocessor tricks that did automatically what you are doing manually: unpack the raw data from a byte buffer to a struct. The idea is to describe the data in an include file:
BEGIN_STRUCT(foo)
UNSIGNED_SHORT(length)
STRING_FIELD(length, label)
UNSIGNED_INT(stride)
FLOAT_ARRAY(3 * stride)
END_STRUCT(foo)
Now you can define these macros to generate the code you need, say the struct declaration, include the above, undef and define the macros again to generate unpacking functions, followed by another include, etc.
NB I first saw this technique used in gcc for abstract syntax tree-related code generation.
If CPP is not powerful enough (or such preprocessor abuse is not for you), substitute a small lex/yacc program (or pick your favorite tool).
It's amazing to me how often it pays to think in terms of generating code rather than writing it by hand, at least in low level foundation code like this.
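A minimal sketch of the first expansion pass of the macro approach above (the macro bodies and the foo_layout.h file name are illustrative, not from the original project):
// Pass 1: expand the description into a struct declaration.
#define BEGIN_STRUCT(name)        struct name {
#define UNSIGNED_SHORT(field)     unsigned short field;
#define STRING_FIELD(len, field)  std::string field;
#define UNSIGNED_INT(field)       unsigned int field;
#define FLOAT_ARRAY(count)        std::vector<float> floats; // only a count is passed, so the member name is fixed here
#define END_STRUCT(name)          };
#include "foo_layout.h"           // holds the BEGIN_STRUCT(foo) ... END_STRUCT(foo) block above
#undef BEGIN_STRUCT
#undef UNSIGNED_SHORT
#undef STRING_FIELD
#undef UNSIGNED_INT
#undef FLOAT_ARRAY
#undef END_STRUCT
// Pass 2 would redefine the same macros to emit an unpacking function instead.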
You would be better off declaring a structure with 1-byte packing (how to do that depends on the compiler). Write using that structure, and read using the same structure. Put only POD types in the structure, hence no std::string etc. Use this structure only for file I/O or other inter-process communication; use a normal struct or class to hold the data for further use in the C++ program.
Since all of your data is variable, you can read the two blocks separately and still use casting:
struct id_contents
{
uint16_t len;
char id[];
} __attribute__((packed)); // assuming gcc, ymmv
struct data_contents
{
uint32_t stride;
float data[];
} __attribute__((packed)); // assuming gcc, ymmv
class my_row
{
const id_contents* id_;
const data_contents* data_;
size_t size_;
public:
my_row(const char* buffer) {
id_= reinterpret_cast<const id_contents*>(buffer);
size_ = sizeof(*id_) + id_->len;
data_ = reinterpret_cast<const data_contents*>(buffer + size_);
size_ += sizeof(*data_) +
data_->stride * sizeof(float); // or however many, 3*float?
}
size_t size() const { return size_; }
};
That way you can use Mr. kbok's answer to parse correctly:
const char* buffer = getPointerToDataSomehow();
my_row data1(buffer);
buffer += data1.size();
my_row data2(buffer);
buffer += data2.size();
// etc.
I personally do it this way:
// some code which loads the file in memory
#pragma pack(push, 1)
struct someFile { int a, b, c; char d[0xEF]; };
#pragma pack(pop)
someFile* f = (someFile*) (file_in_memory);
int filePropertyA = f->a;
Very effective way for fixed-size structs at the start of the file.
Use a serialization library. Here are a few:
Boost serialization and Boost fusion
Cereal (my own library)
Another library called cereal (same name as mine but mine predates theirs)
Cap'n Proto
The Kaitai Struct library provides a very effective declarative approach, which has the added bonus of working across programming languages.
After installing the compiler, you will want to create a .ksy file that describes the layout of your binary file. For your case, it would look something like this:
# my_type.ksy
meta:
id: my_type
endian: le # little-endian, matching the 0x05 0x00 example ("be" for big-endian)
seq: # describes the actual sequence of data one-by-one
- id: len
type: u2 # unsigned short in C++, two bytes
- id: my_string
type: str
size: 5
encoding: UTF-8
- id: stride
type: u4 # unsigned int in C++, four bytes
- id: float_data
type: f4 # a four-byte floating point number
repeat: expr
repeat-expr: 6 # repeat six times
You can then compile the .ksy file using the kaitai struct compiler ksc:
# wherever the compiler is installed
# -t specifies the target language, in this case C++
/usr/local/bin/kaitai-struct-compiler my_type.ksy -t cpp_stl
This will create a my_type.cpp file as well as a my_type.h file, which you can then include in your C++ code:
#include <fstream>
#include <iostream>
#include <kaitai/kaitaistream.h>
#include "my_type.h"
int main()
{
std::ifstream ifs("my_data.bin", std::ifstream::binary);
kaitai::kstream ks(&ifs);
my_type_t obj(&ks);
std::cout << obj.len() << '\n'; // you can now access properties of the object
return 0;
}
Hope this helped! You can find the full documentation for Kaitai Struct here. It has a load of other features and is a fantastic resource for binary parsing in general.
I use the ragel tool to generate pure C procedural source code (no tables) for microcontrollers with 1-2K of RAM. It does not use any file I/O or buffering, and it produces both easy-to-debug code and a .dot/.pdf file with a state machine diagram.
ragel can also output Go, Java, etc. code for parsing, but I did not use these features.
The key feature of ragel is the ability to parse any byte-built data, but you can't dig into bit fields. Another problem is that ragel can parse regular structures, but it has no recursion and no grammar-based syntax parsing.

Storing dynamic length data 'inside' structure

Problem statement: the user provides some data which I have to store inside a structure. The data I receive comes in a data structure which allows the user to dynamically add data to it.
Requirement: I need a way to store this data 'inside' the structure, contiguously.
E.g. suppose the user can pass me strings which I have to store. So I wrote something like this:
void pushData( string userData )
{
struct
{
string junk;
} data;
data.junk = userData;
}
Problem: with this kind of storage, the actual data is not really stored 'inside' the structure because std::string is not POD. A similar problem occurs when I receive a vector or a list.
Then I could do something like this :
void pushData( string userData )
{
struct
{
char junk[100];
} data;
// Copy userdata into array junk
}
This stores the data 'inside' the structure, but then I can't put an upper limit on the size of the string the user can provide.
Can someone suggest some approach ?
P.S.: I read something about serializability, but couldn't really make out whether it would be helpful in my case. If it is the way forward, can someone give an idea of how to proceed with it?
Edit :
No this is not homework.
I have written an implementation which can pass this kind of structure over message queues. It works fine with PODs, but I need to extend it to pass on dynamic data as well.
This is how message queue takes data:
i. Give it a pointer and tell it the size up to which it should read and transfer data.
ii. For plain old data types, the data is stored inside the structure, so I can easily pass the pointer to this structure through the message queue to other processes.
iii. But in the case of vector/string/list etc., the actual data is not inside the structure, so if I pass the pointer to this structure, the message queue will not really pass on the actual data, only the pointers stored inside it.
You can see this and this. I am trying to achieve something similar.
void pushData( string userData )
{
struct Data
{
char junk[1];
};
struct Data* data = static_cast<struct Data*>(malloc(userData.size() + 1)); // C++ requires the cast from void*
memcpy(data->junk, userData.data(), userData.size());
data->junk[userData.size()] = '\0'; // assuming you want null termination
}
Here we use an array of length 1, but we allocate the struct using malloc so it can actually have any size we want.
You ostensibly have some rather artificial constraints, but to answer the question: for a single struct to contain a variable amount of data is not possible... the closest you can come is to have the final member be say char [1], put such a struct at the start of a variably-sized heap region, and use the fact that array indexing is not checked to access memory beyond that character. To learn about this technique, see http://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html (or the answer John Zwinck just posted)
Another approach is e.g. template <size_t N> struct X { char data_[N]; };, but each instantiation will be a separate struct type, and you can't pre-instantiate every size you might want at run-time (given you've said you don't want an upper bound). Even if you could, writing code that handles different instantiations as the data grows would be nightmarish, as would the code bloat caused.
Having a structure in one place with a string member with data in another place is almost always preferable to the hackery above.
Taking a hopefully-not-so-wild guess, I assume your interest is in serialising the object based on starting address and size, in some generic binary block read/write...? If so, that's still problematic even if your goal were satisfied, as you need to find out the current data size from somewhere. Writing struct-specific serialisation routines that incorporate the variable-length data on the heap is much more promising.
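A minimal sketch of such a struct-specific routine, flattening a length-prefixed string into one contiguous block that can be handed to a byte-oriented message queue (all names here are illustrative):
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>
// Pack a length header followed by the characters into a single contiguous buffer.
std::vector<char> packString(const std::string& userData)
{
    uint32_t len = static_cast<uint32_t>(userData.size());
    std::vector<char> block(sizeof(len) + len);
    std::memcpy(block.data(), &len, sizeof(len));                   // header: payload length
    std::memcpy(block.data() + sizeof(len), userData.data(), len);  // payload: the characters
    return block; // block.data() / block.size() give the queue one flat region to copy
}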
A simple solution: estimate a maximum size for the data (e.g. 1000). Using a fixed-size buffer also avoids repeated free/malloc cycles (and the resulting heap fragmentation) when pushData is called multiple times.
#define MAX_SIZE 1000
void pushData( string userData )
{
struct Data
{
char junk[MAX_SIZE];
};
struct Data data;
memcpy(data.junk, userData.data(), userData.size()); // caller must ensure userData.size() < MAX_SIZE
data.junk[userData.size()] = '\0'; // assuming you want null termination
}
As mentioned by John Zwinck, you can use dynamic memory allocation to solve your problem.
void pushData( string userData )
{
struct Data
{
char *junk;
};
struct Data *d = static_cast<struct Data*>(calloc(1, sizeof(struct Data)));
d->junk = static_cast<char*>(malloc(userData.size() + 1));
strcpy(d->junk, userData.c_str()); // remember to free d->junk and d when done
}