My goal is to read data held in a binary file, decode it, and stream it to the block's output. The data is just a sequence of floats. I also have the sampling rate at which it was collected (constant for all of the data). I have a C++ OOT module based on a source block; I managed to decode the file, and now I have a vector of floats. How do I put them on the block's output as a stream of floats?
int reader_impl::work(int noutput_items,
                      gr_vector_const_void_star& input_items,
                      gr_vector_void_star& output_items)
{
    auto out = static_cast<output_type*>(output_items[0]);
    // Better kept as a member variable than a function-local static.
    static size_t current_data_item = 0;
    for (int i = 0; i < noutput_items; ++i) {
        out[i] = static_cast<float>(data[current_data_item]);
        current_data_item++;
        if (current_data_item == data.size()) {
            current_data_item = 0; // wrap around and repeat the data
        }
    }
    return noutput_items;
}
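For context, a minimal sketch of how the surrounding block might be set up; the constructor signature and parameter names are assumptions, not taken from the actual module. A source block has no inputs and one float output, and GNU Radio pulls samples as fast as downstream consumes them, so the sampling rate is usually honored by placing a Throttle block after the source rather than inside work():

reader_impl::reader_impl(const std::string& path, double samp_rate)
    : gr::sync_block("reader",
                     gr::io_signature::make(0, 0, 0),              // no inputs
                     gr::io_signature::make(1, 1, sizeof(float)))  // one float stream out
{
    // Open `path` and decode the floats into the `data` member here.
    // `samp_rate` is not consumed by work(); to pace the stream in
    // real time, connect a gr::blocks::throttle after this source.
}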
I need to write more than 2 GB to an instance of the FlatBufferBuilder class. A question on Stack Overflow briefly covers this topic, and I am trying to bypass the limit by following the advice given there: create a sequence of flatbuffers.
// The flatbuffer struct contains a vector that can grow to ~3.5 million
// elements (which consequently exceeds 2 GB).
struct ValidationReportT : public flatbuffers::NativeTable
{
    std::vector<flatbuffers::unique_ptr<ValidationDefectT>> defects{};
};
// The code snippet below is my attempt to create a sequence of FlatBufferBuilder instances.
void createReport(std::vector<flatbuffers::FlatBufferBuilder>& fbbDefectsVec)
{
    ValidationReportT report;        // report has all defects already populated
    ValidationReportT partialReport; // captures blocks of max_elems, writes them to a
                                     // FlatBufferBuilder, then its defects vector is cleared
    if (report.defects.size())
    {
        int size = report.defects.size();
        auto elem_size = sizeof(*report.defects[0]);
        float32_t max_elems = FLATBUFFERS_MAX_BUFFER_SIZE / (float32_t)elem_size;
        auto fbbsNeeded = size < max_elems ? 1 : (int)ceil(size / max_elems);
        fbbDefectsVec.resize(fbbsNeeded);
        int offset = 0;
        int idx = 0;
        size_t start = offset;
        size_t end = offset + max_elems;
        // Loop over blocks of max_elems
        while (size > max_elems)
        {
            for (size_t i = start; i < end; i++)
            {
                partialReport.defects.push_back(std::move(report.defects[i]));
            }
            offset += max_elems;
            size -= max_elems;
            fbbDefectsVec[idx].Finish(
                validation::ValidationReport::Pack(fbbDefectsVec[idx], &partialReport));
            idx++;
            start = offset;
            end = offset + max_elems;
            partialReport.defects.clear();
        }
        if (size > 0)
        {
            // Copy the remaining defects; set the loop bounds.
            if (max_elems >= report.defects.size()) // all defects fit into one flatbuffer vector
            {
                start = 0;
                end = report.defects.size();
            }
            else
            {
                start = end;
                end = report.defects.size();
            }
            for (size_t i = start; i < end; i++)
            {
                partialReport.defects.push_back(std::move(report.defects[i]));
            }
            fbbDefectsVec[idx].Finish(
                validation::ValidationReport::Pack(fbbDefectsVec[idx], &partialReport));
        }
    }
}
Even though partialReport holds only the maximum allowed element count based on FLATBUFFERS_MAX_BUFFER_SIZE, I still get the size-limit assertion failure:
Assertion `size() < FLATBUFFERS_MAX_BUFFER_SIZE' failed.
Aborted
The failure occurs on this particular line:
fbbDefectsVec[idx].Finish(
    validation::ValidationReport::Pack(fbbDefectsVec[idx], &partialReport));
Why is this so? And how do I bypass this?
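One thing worth checking (a sketch, not a verified fix): sizeof(*report.defects[0]) measures the in-memory size of ValidationDefectT, not its serialized size, which can be larger, so max_elems may overestimate how many defects fit in one buffer. Packing a single defect and measuring it gives a more realistic bound; validation::ValidationDefect::Pack is assumed here from the generated object API, and GetSize() is the standard FlatBufferBuilder accessor:

// Measure the serialized size of one defect instead of its in-memory size.
// (Assumes the generated validation::ValidationDefect::Pack exists.)
flatbuffers::FlatBufferBuilder probe;
probe.Finish(validation::ValidationDefect::Pack(probe, report.defects[0].get()));
size_t serialized_elem_size = probe.GetSize();
size_t max_elems = FLATBUFFERS_MAX_BUFFER_SIZE / serialized_elem_size;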
I have a text file that has values, and I want to put them into a 2D vector.
I can do it with arrays, but I don't know how to do it with vectors.
The vector should be sized like vector2D[nColumns][nLines], which I don't know in advance. At most I can have the number of columns in the text file, but not the number of lines.
The number of columns can differ from one .txt file to another.
.txt example:
189.53 -1.6700 58.550 33.780 58.867
190.13 -3.4700 56.970 42.190 75.546
190.73 -1.3000 62.360 34.640 56.456
191.33 -1.7600 54.770 35.250 65.470
191.93 -8.7500 58.410 33.900 63.505
with arrays I do it like this:
//------ Declares array for values ------//
const int nCol = countCols; // read from file
float values[nCol][nLin];

// Fill the array with -1
for (int c = 0; c < nCol; c++) {
    for (int l = 0; l < nLin; l++) {
        values[c][l] = -1;
    }
}

// reads file to end of *file*, not line
while (!inFile.eof()) {
    for (int y = 0; y < nLin; y++) {
        for (int i = 0; i < nCol; i++) {
            inFile >> values[i][y];
        }
    }
}
Instead of using
float values[nCol][nLin];
use
std::vector<std::vector<float>> v;
You have to #include <vector> for this.
Now you don't need to worry about size.
Adding elements is as simple as
std::vector<float> f; f.push_back(7.5); v.push_back(f);
Also, do not use .eof() on streams: the flag is not set until after a read past the end has already failed, so the last iteration processes bogus data.
while(!inFile.eof())
Should be
while (inFile >> values[i][y]) // returns true as long as it reads data into values[i][y]
NOTE: Instead of vector, you can also use std::array, which is apparently the best thing since sliced bread.
My suggestion:
const int nCol = countCols;             // read from file
std::vector<std::vector<float>> values; // your entire data set of values
std::vector<float> line(nCol, -1.0);    // one line of nCol values, filled with -1

// reads file to end of *file*, not line
bool done = false;
while (!done)
{
    for (int i = 0; !done && i < nCol; i++)
    {
        done = !(inFile >> line[i]);
    }
    values.push_back(line);
}
Now your dataset has:
values.size() // number of lines
and can also be addressed with array notation (besides using iterators):
float v = values[i][j];
Note: this code does not take into account that the last line may have fewer than nCol values, so the tail of the line vector will hold stale values at end of file. You may want to trim or drop that final line when done becomes true, before pushing it into values.
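A minimal sketch of one way to handle that, counting how many values were actually read so a partial final row is trimmed instead of padded with stale data (assumes inFile is an open std::ifstream and nCol is known, as above):

std::vector<std::vector<float>> values;
std::vector<float> line(nCol);
int filled = 0;
float x;
while (inFile >> x) {
    line[filled++] = x;
    if (filled == nCol) {       // complete row: store it and start over
        values.push_back(line);
        filled = 0;
    }
}
if (filled > 0) {               // partial last row: keep only what was read
    line.resize(filled);
    values.push_back(line);
}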
I am trying to read binary data using Matlab, but the data I get is not right.
The binary file is written:
ofstream outfile2(outfilename2.c_str(), ofstream::binary);
...
vector<complex<double> > Cxy(2560);
... // data collected from the device
for (unsigned j = 0; j < 2560; j++)
{
    outfile2.write((const char*)&Cxy[j], sizeof(Cxy[j]));
    outfile2.flush();
}
when I read the data using:
const int fft_len = 256 * 10;
std::vector<std::complex<double> > vtr(fft_len); // already sized, no resize needed
{
    std::ifstream input("data.bin", std::ifstream::binary);
    for (int i = 0; i < fft_len; ++i)
        input.read((char*)&vtr[i], sizeof(vtr[i]));
}
for (size_t i = 0; i < vtr.size(); ++i)
    std::cout << vtr[i] << std::endl; // data look like: (1.31649816e+07,3.97112323e+06)
I copied and pasted that output into Matlab and plotted it.
I also wanted to try a different way and read the data directly in Matlab:
filename = 'data.bin';
fid = fopen(filename);
fseek(fid,4*2560,'bof');
y = fread(fid,[2,inf],'long');
x = complex(y(1,:),y(2,:));
plot(abs(x));
figure, plot(abs(x));
but the length of x is only 104, not 2560 as I expected.
Can anyone help?
Thank you so much.
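Not an answer, but a sketch for sanity-checking the file layout: each element written is a std::complex<double>, i.e. two 8-byte doubles interleaved as (real, imag), so the file should be 2560 * 16 = 40960 bytes, and any reader has to match that layout (the Matlab snippet above skips 4*2560 bytes and reads integer 'long' values, neither of which matches):

#include <complex>
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream input("data.bin", std::ifstream::binary);
    input.seekg(0, std::ios::end);         // jump to the end...
    std::streamsize bytes = input.tellg(); // ...to get the file size
    std::cout << "file size: " << bytes << " bytes = "
              << bytes / sizeof(std::complex<double>)
              << " complex<double> samples\n"; // expect 2560
}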
This program uses sockets to transfer highly redundant 2D byte arrays (image-like). While the transfer rate is comparatively high (10 Mbps), the arrays are also highly redundant (e.g. each row may contain several consecutive similar values).
I have tried zlib and lz4, and the results were promising; however, I am still looking for a better compression method, and please remember that it should be relatively fast, as lz4 is. Any suggestions?
You should look at the PNG algorithms for filtering image data before compressing. They range from simple to more sophisticated methods for predicting values in a 2D array based on previous values. To the extent that the predictions are good, the filtering can make for dramatic improvements in the subsequent compression step.
You should simply try these filters on your data, and then feed it to lz4.
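For illustration, a minimal sketch of one such predictor, PNG's "Up" filter, which replaces each byte with its difference from the byte directly above it; runs of similar rows turn into runs of near-zero bytes, which lz4 or zlib then compress far better (the function names are mine, not from any library):

#include <cstdint>
#include <vector>

// PNG-style "Up" filter: diff each byte against the byte above it.
// Processed bottom-up so the predecessor row is still unmodified.
void filterUp(std::vector<std::vector<uint8_t>>& img)
{
    for (size_t r = img.size(); r-- > 1; ) {
        for (size_t c = 0; c < img[r].size(); ++c) {
            img[r][c] = static_cast<uint8_t>(img[r][c] - img[r - 1][c]);
        }
    }
}

// Inverse filter on the receiving end: add the already-reconstructed
// row above, top-down.
void unfilterUp(std::vector<std::vector<uint8_t>>& img)
{
    for (size_t r = 1; r < img.size(); ++r) {
        for (size_t c = 0; c < img[r].size(); ++c) {
            img[r][c] = static_cast<uint8_t>(img[r][c] + img[r - 1][c]);
        }
    }
}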
You could create your own: if the data in the rows is similar, you can create a resource/index map, reducing the size substantially. Something like this:
Original file:
row 1: 1212, 34,45,1212,45,34,56,45,56
row 2: 34,45,1212,78,54,87,....
you could create a list of unique values, then use an index in its place:
34,45,54,56,78,87,1212
row 1: 6,0,1,6,1,0,3,1,3
this can potentially save you 30% or more in transferred data, but it depends on how redundant the data is
UPDATE
Here is a simple implementation:
#include <iterator>
#include <set>
#include <string>
#include <vector>

typedef std::vector<std::vector<int>> DataTable; // assuming a 2D vector implementation

std::set<int> uniqueValues;
DataTable my2dData;
std::string indexMap;
std::string fileCompressed;

// Index of a value within the sorted set of unique values.
int Find(int value) {
    return std::distance(uniqueValues.begin(), uniqueValues.find(value));
}

void CreateCompressedFile() {
    // create the list of unique values
    for (size_t i = 0; i < my2dData.size(); ++i) {
        for (size_t j = 0; j < my2dData[i].size(); ++j) {
            uniqueValues.insert(my2dData[i][j]);
        }
    }

    // create the indexes, one comma-separated row per input row
    for (size_t i = 0; i < my2dData.size(); ++i) {
        std::string tmpRow;
        for (size_t j = 0; j < my2dData[i].size(); ++j) {
            if (!tmpRow.empty()) {
                tmpRow += ",";
            }
            tmpRow += std::to_string(Find(my2dData[i][j]));
        }
        tmpRow += "\r\n";
        indexMap += tmpRow;
    }

    // create the file to transfer: value table first, then the index map
    for (std::set<int>::iterator it = uniqueValues.begin(); it != uniqueValues.end(); ++it) {
        if (fileCompressed.empty()) {
            fileCompressed = "i:" + std::to_string(*it);
        } else {
            fileCompressed += "," + std::to_string(*it);
        }
    }
    fileCompressed += "\r\nd:" + indexMap;
}
Now on the receiving end you just do the opposite: if the line starts with "i:" you read the unique-value table, and if it starts with "d:" you read the indexed data.
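A sketch of that receiving side, matching the format produced above (only the first data row carries the "d:" prefix; the remaining rows follow it plain, and DataTable is the same std::vector<std::vector<int>> typedef):

#include <sstream>
#include <string>
#include <vector>

DataTable Decompress(const std::string& fileCompressed) {
    std::istringstream in(fileCompressed);
    std::string line;
    std::vector<int> values; // the unique-value table from the "i:" line
    DataTable result;
    while (std::getline(in, line)) {
        if (!line.empty() && line.back() == '\r') line.pop_back(); // strip CR
        if (line.empty()) continue;
        bool header = line.compare(0, 2, "i:") == 0;
        if (header || line.compare(0, 2, "d:") == 0) line = line.substr(2);
        std::istringstream fields(line);
        std::string tok;
        std::vector<int> row;
        while (std::getline(fields, tok, ',')) row.push_back(std::stoi(tok));
        if (header) {
            values = row;                        // store the value table
        } else {
            for (size_t k = 0; k < row.size(); ++k) row[k] = values[row[k]];
            result.push_back(row);               // map indexes back to values
        }
    }
    return result;
}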
I'm having a problem with one of my functions. I'm working on a simple tile map editor and trying to implement a 3D array to keep track of tiles (x, y, layer). Before this I had a 1D array where all the tiles were just listed sequentially:
bool Map::OnLoad(char* File) {
    TileList.clear();
    FILE* FileHandle = fopen(File, "r");
    if (FileHandle == NULL) {
        return false;
    }
    for (int Y = 0; Y < MAP_HEIGHT; Y++) {
        for (int X = 0; X < MAP_WIDTH; X++) {
            Tile tempTile;
            fscanf(FileHandle, "%d:%d ", &tempTile.TileID, &tempTile.TilePassage);
            TileList.push_back(tempTile);
        }
        fscanf(FileHandle, "\n");
    }
    fclose(FileHandle);
    return true;
}
This basically reads strings from the file, which look like:
2:1 1:0 3:2...
where the first number is the tile ID and the second is the tile passability. The above function works. My 3D arrays are also correctly constructed; I tested them with simple assignments and by reading values back. The function that gives me problems is the following (note that the 2 suffix, as in OnLoad2(), was added so I can keep the old variables and function untouched until the prototype works):
bool Map::OnLoad2(char* File) {
    TileList2.clear();
    FILE* FileHandle2 = fopen(File, "r");
    if (FileHandle2 == NULL) {
        return false;
    }
    for (int Y = 0; Y < MAP_HEIGHT; Y++) {
        for (int X = 0; X < MAP_WIDTH; X++) {
            Tile tempTile;
            fscanf(FileHandle2, "%d:%d ", &tempTile.TileID, &tempTile.TilePassage);
            TileList2[X][Y][0] = tempTile;
        }
        fscanf(FileHandle2, "\n");
    }
    fclose(FileHandle2);
    return true;
}
While this function compiles without any errors, the application freezes and crashes as soon as it starts. For additional information, MAP_WIDTH and MAP_HEIGHT are both set to 40, and the 3D array was constructed like this:
TileList2.resize(MAP_HEIGHT);
for (int i = 0; i < MAP_HEIGHT; ++i) {
    TileList2[i].resize(MAP_WIDTH);
    for (int j = 0; j < MAP_WIDTH; ++j) {
        TileList2[i][j].resize(3);
    }
}
I would appreciate it if you could point out what I need to fix; as far as I know I must have messed up the loop structure, since the 3D array initializes and works properly on its own. Thank you for your help!
TileList2.clear();
This line empties TileList2, so it is back to a zero-length vector, and the subsequent TileList2[X][Y][0] assignments index out of bounds (undefined behavior, hence the crash). Delete that line and you will probably be okay.
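Alternatively, if the intent of that line was to reset the contents on every load, a sketch of re-establishing the 40x40x3 shape instead of just emptying the vector (assuming TileList2 is a std::vector<std::vector<std::vector<Tile>>> and Tile is default-constructible):

// Re-create the MAP_HEIGHT x MAP_WIDTH x 3 shape with default Tiles,
// so the index assignments in the loops stay in bounds.
TileList2.assign(MAP_HEIGHT,
                 std::vector<std::vector<Tile>>(MAP_WIDTH,
                                                std::vector<Tile>(3)));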