Combining GDAL dataset tiles in C++

I'm porting some prototype Python code that uses GDAL to C++, and I'm stuck on a section that reads tiles of elevation data from a directory and combines them into a single dataset. The Python code looks like:
# Read files containing elevation data in 1° x 1° squares.
geo_file_list = glob.glob ("./geodata/*.hgt")
# Turn the list of files into a list of datasets.
elv_dataset_list = list(map(gdal.Open, geo_file_list))
# Merge datasets into a single one.
elv_dataset = gdal.BuildVRT('', elv_dataset_list)
I never write out the resulting VRT, only keep it in memory for further processing. I'd use the MEM driver instead if that were possible.
In C++, I'm starting with:
// Set the data directory.
std::string elev_dir = "./geodata/";
// Read each elevation file in the data directory into a dataset.
std::vector<GDALDataset *> elev_datasets;
for (const auto& p : std::filesystem::directory_iterator(elev_dir)) {
    // Match the Python glob on *.hgt so only elevation tiles are opened.
    if (p.path().extension() == ".hgt")
        elev_datasets.push_back((GDALDataset *) GDALOpen(p.path().string().c_str(), GA_ReadOnly));
}
The part I'm struggling with is sending the resulting std::vector to GDALBuildVRT(). Very possible that I'm going about this all wrong!

The trick was realizing that nullptr is an acceptable input into GDALBuildVRT() for arguments that should be ignored. Hence, the call looks like:
GDALDataset *dest_dataset = (GDALDataset *) GDALBuildVRT(
    "", static_cast<int>(elev_datasets.size()),
    (GDALDatasetH *) elev_datasets.data(), nullptr, nullptr, nullptr);
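For reference, here is a minimal end-to-end sketch of the whole thing (assuming GDAL 2.1+, where GDALBuildVRT() is declared in gdal_utils.h; error checking omitted). Collecting GDALDatasetH handles directly avoids the cast on .data():
#include <filesystem>
#include <string>
#include <vector>
#include "gdal_priv.h"
#include "gdal_utils.h" // GDALBuildVRT()

int main() {
    GDALAllRegister();
    std::string elev_dir = "./geodata/";
    // Read each .hgt file in the data directory into a dataset handle.
    std::vector<GDALDatasetH> elev_datasets;
    for (const auto& p : std::filesystem::directory_iterator(elev_dir)) {
        if (p.path().extension() == ".hgt")
            elev_datasets.push_back(GDALOpen(p.path().string().c_str(), GA_ReadOnly));
    }
    // Empty destination name plus nullptr options keeps the VRT in memory.
    GDALDataset *dest_dataset = (GDALDataset *) GDALBuildVRT(
        "", static_cast<int>(elev_datasets.size()),
        elev_datasets.data(), nullptr, nullptr, nullptr);
    // ... further processing on dest_dataset ...
    GDALClose(dest_dataset);
    for (GDALDatasetH ds : elev_datasets)
        GDALClose(ds);
    return 0;
}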

Related

Get raw buffer for in-memory dataset in GDAL C++ API

I have generated a GeoTiff dataset in-memory using GDALTranslate() with a /vsimem/ filepath. I need access to the buffer for the actual GeoTiff file to put it in a stream for an external API. My understanding is that this should be possible with VSIGetMemFileBuffer(); however, I can't seem to get it to return anything other than nullptr.
My code is essentially as follows:
//^^ GDALDataset* srcDataset created somewhere up here ^^
//psOptions struct has "-b 4" and "-of GTiff" settings.
const char* filep = "/vsimem/foo.tif";
GDALDataset* gtiffData = (GDALDataset*) GDALTranslate(filep, srcDataset, psOptions, nullptr);
vsi_l_offset size = 0;
GByte* buf = VSIGetMemFileBuffer(filep, &size, true); //<-- returns nullptr
On inspection, gtiffData seems to be a real dataset: it has all the appropriate properties (number of bands, raster size, etc.). When I give GDALTranslate() a real filesystem location rather than the /vsimem/ path, the resulting file also renders correctly when loaded in QGIS.
Looking at the source for VSIGetMemFileBuffer(), it should really only return nullptr if the file can't be found, which suggests I'm using it incorrectly. Does anyone know what the correct usage is?
Bonus points: Is there a better way to do this (stream the file out)?
Thanks!
I don't know anything about the C++ API. But in Python, the snippet below is what I sometimes use to get the contents of an in-memory file; in my case mainly VRTs, but it shouldn't be any different for other formats.
That said, I don't know whether the VSI API translates one-to-one to C++.
from osgeo import gdal
filep = "/vsimem/foo.tif"
# get the file size
stat = gdal.VSIStatL(filep, gdal.VSI_STAT_SIZE_FLAG)
# open file
vsifile = gdal.VSIFOpenL(filep, 'r')
# read entire contents
vsimem_content = gdal.VSIFReadL(1, stat.size, vsifile)
# close the file handle when done
gdal.VSIFCloseL(vsifile)
In the case of a VRT the content would be text, shown with something like print(vsimem_content.decode()). For a tiff it would of course be binary data.
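For what it's worth, the VSI*L calls above are the same C functions the Python bindings wrap, so the snippet translates almost directly to C++. A minimal sketch, reusing filep from the question:
#include <vector>
#include "cpl_vsi.h"

VSIStatBufL stat;
if (VSIStatL(filep, &stat) == 0) {
    // Open the in-memory file and read its entire contents.
    VSILFILE* vsifile = VSIFOpenL(filep, "rb");
    std::vector<GByte> content(stat.st_size);
    VSIFReadL(content.data(), 1, stat.st_size, vsifile);
    VSIFCloseL(vsifile);
}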
I came back to this after putting in a workaround, and upon swapping things back over it seems to work fine. @mmomtchev suggested looking at the CPL_DEBUG output, which showed nothing unusual (and was silent during the actual VSIGetMemFileBuffer call).
In particular, for other reasons I had to put a GDALWarp call between calling GDALTranslate and accessing the buffer, and it seems that this is what makes the difference. My guess is that GDALWarp calls VSIFOpenL internally (although I can't find this in the source) and that this does some kind of initialisation for VSIGetMemFileBuffer. Something to try for anyone else who encounters this.
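One more hedged thing to try, based on the GTiff driver's write caching rather than on anything in this thread: make sure the dataset is flushed or closed before seizing the buffer, since the /vsimem/ file may not hold the complete image until then:
// Close (or at least FlushCache()) the dataset so cached writes reach /vsimem/.
GDALClose(gtiffData);
vsi_l_offset size = 0;
// TRUE unlinks the file and hands ownership of the memory to the caller.
GByte* buf = VSIGetMemFileBuffer(filep, &size, TRUE);
// ... stream out buf[0..size) ...
CPLFree(buf);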

Pass Binary string/file content from c++ to node js

I'm trying to pass the content of a binary file from C++ to Node using a native add-on built with node-gyp. I have a process that creates a binary file in the .fit format, and I need to pass the content of the file to JS to process it. So, my first approach was to extract the content of the file into a string and try to pass it to Node like this.
char c;
std::string content = "";
while (file.get(c)) {
    content += c;
}
I'm using the following code to pass it to Node
v8::Local<v8::ArrayBuffer> ab = v8::ArrayBuffer::New(args.GetIsolate(), (void*)content.data(), content.size());
args.GetReturnValue().Set(ab);
In Node I get an ArrayBuffer, but when I print the content to a file it is different from the one that a C++ cout shows.
How can I pass the binary data successfully?
Thanks.
Probably the best approach is to write your data to a binary disk file. Write to disk in C++; read from disk in NodeJS.
Very importantly, make sure you specify BINARY MODE.
For example:
myFile.open ("data2.bin", ios::out | ios::binary);
Do not use "strings" (at least not unless you want to uuencode). Use buffers. Here is a good example:
How to read binary files byte by byte in Node.js
var fs = require('fs');
fs.open('file.txt', 'r', function(status, fd) {
    if (status) {
        console.log(status.message);
        return;
    }
    // Buffer.alloc() replaces the deprecated `new Buffer(size)`.
    var buffer = Buffer.alloc(100);
    fs.read(fd, buffer, 0, 100, 0, function(err, num) {
        ...
    });
});
You might also find these links helpful:
https://nodejs.org/api/buffer.html (good examples for specific Node APIs)
http://blog.paracode.com/2013/04/24/parsing-binary-data-with-node-dot-js/ (good discussion of some of the issues you might face, including "endianness" and "interpreting numbers")
ADDENDUM:
The OP clarified that he's considering using C++ as a NodeJS add-on (not a standalone C++ program).
Consequently, using buffers is definitely an option. Here is a good tutorial:
https://community.risingstack.com/using-buffers-node-js-c-plus-plus/
If you choose to go this route, I would DEFINITELY download the example code and play with it first, before implementing buffers in your own application.
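As an editorial aside, a likely culprit in the question's snippet: the v8::ArrayBuffer::New() overload taking an external pointer does not copy the data, so the ArrayBuffer ends up referencing a std::string that has since been destroyed. A minimal sketch of the buffer approach using node::Buffer::Copy(), which copies the bytes into GC-owned memory (the function and file names here are hypothetical):
#include <node.h>
#include <node_buffer.h>
#include <fstream>
#include <iterator>
#include <string>

void GetFileContents(const v8::FunctionCallbackInfo<v8::Value>& args) {
    v8::Isolate* isolate = args.GetIsolate();
    // Read the whole file in binary mode to avoid newline translation.
    std::ifstream file("data.fit", std::ios::in | std::ios::binary);
    std::string content((std::istreambuf_iterator<char>(file)),
                        std::istreambuf_iterator<char>());
    // node::Buffer::Copy() copies the bytes, so `content` can safely
    // go out of scope after this call.
    args.GetReturnValue().Set(
        node::Buffer::Copy(isolate, content.data(), content.size())
            .ToLocalChecked());
}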
It depends, but you could for example use Redis:
Values can be strings (including binary data) of every kind, for
instance you can store a jpeg image inside a value. A value can't be
bigger than 512 MB.
If the file is bigger than 512 MB, you can store it in chunks.
But I wouldn't suggest that, since Redis is an in-memory data store.
It's easy to implement in both C++ and Node.js.
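For illustration, a hedged sketch of the C++ side with hiredis (the key naming and chunking scheme are hypothetical; the Node side can use any Redis client):
#include <cstddef>
#include <hiredis/hiredis.h>

// Store one binary chunk of the file under a numbered key, e.g. "fitfile:0".
void storeChunk(redisContext* c, int index, const char* data, size_t len) {
    // %b tells hiredis to take a (pointer, length) pair of binary data.
    redisReply* reply = (redisReply*) redisCommand(
        c, "SET fitfile:%d %b", index, data, len);
    freeReplyObject(reply);
}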

Writing ITK (Insight Toolkit) results to a local buffer

After applying an ITK filter pipeline, how do I write the result back to a buffer to be used outside ITK?
The Insight Software Guide has an example (Book 1, Chapter 4.1.7, "Importing Image Data from a Buffer"), and the same example is also found in the WikiExamples.
It shows how one can wrap an ITK pointer around a C++ array, for further use, by means of the ImportImageFilter object.
However, this example then uses a Writer object to write the filtered result to a file.
How do I write the filtered result into another C++ array instead? Or how do I overwrite the array I've used as input?
In essence, I have an application which contains an image in a buffer (localBuffer), which I can wrap following the example code:
[...]
const bool filterOwnsBuffer = false;
importFilter->SetImportPointer( localBuffer, size[0]*size[1], filterOwnsBuffer );
I can then use it in any ITK pipeline and 'update' it at a certain stage:
[...]
FilterType::Pointer filter = FilterType::New();
filter->SetInput( importFilter->GetOutput() );
filter->Update();
How do I now ensure that localBuffer has the filtered values? Or, alternatively, how do I set a different result buffer to the output values? Do I have to use an image iterator and 'loop' over my buffer manually? Or can I use filter->GetOutput() more directly?
A little code example or a link to an according example would be very much appreciated.
(Simply the "Exporting Image Data to a Buffer" equivalent to the given import example.)
ImageType::Pointer output = filter->GetOutput();
ImageType::PixelContainer * outputContainer = output->GetPixelContainer();
ImageType::PixelContainer::Element * resultBuffer = outputContainer->GetBufferPointer();
See the Image documentation and ImportImageContainer documentation.
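Putting that together, a minimal sketch (the image type is assumed to match the question's ImportImageFilter instantiation):
// Assumed to match the import filter's pixel type and dimension.
using ImageType = itk::Image<InputPixelType, 2>;

filter->Update(); // make sure the pipeline has actually executed
ImageType::Pointer output = filter->GetOutput();
// The returned buffer is owned by the output image and stays valid
// only as long as the image does; copy the data out if you need it longer.
InputPixelType* resultBuffer = output->GetPixelContainer()->GetBufferPointer();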
Here is the remedy for me:
memcpy(buffer, filter->GetOutput()->GetBufferPointer(),
       size[0]*size[1]*sizeof(InputPixelType));
This works because the data has already been copied into buffer (the pointer to your data) by the time the filter is destroyed.

Append to a JSON array in a JSON file on disk, every second using C++

This is my first post here, so please bear with me.
I have searched high and low on the internet for an answer, but I've not been able to resolve my issue, so I have decided to write a post here.
I am trying to write (append) to a JSON array in a file using C++ and Jzon, at an interval of one write each second. The JSON file is initially written by a "Prepare" function. Another function is then called each second to add an array to the JSON file and append a new object to the array.
I have tried many things, most of which resulted in all sorts of issues. My latest attempt gave me the best results and this is the code that I have included below. However, the approach I took is very inefficient as I am writing an entire array every second. This is having a massive hit on CPU utilisation as the array grows, but not so much on memory as I had first anticipated.
What I really would like to be able to do is to append to an existing array contained in a JSON file on disk, line by line, rather than having to clear the entire array from the JSON object and rewriting the entire file, each and every second.
I am hoping that some of the geniuses on this website will be able to point me in the right direction.
Thank you very much in advance.
Here is my code:
//Create some objects somewhere at the top of the cpp file.
Jzon::Object jsonFlight;
Jzon::Array jsonFlightPath;
Jzon::Object jsonCoordinates;

int PrepareFlight(const char* jsonfilename) {
    //...SOME PREPARE FUNCTION STUFF GOES HERE...
    //Add the flight information to the jsonFlight root JSON object.
    jsonFlight.Add("Flight Number", flightnum);
    jsonFlight.Add("Origin", originicao);
    jsonFlight.Add("Destination", desticao);
    jsonFlight.Add("Pilot in Command", pic);
    //Write the jsonFlight object to a .json file on disk. Filename is passed in as a param of the function.
    Jzon::FileWriter::WriteFile(jsonfilename, jsonFlight, Jzon::NoFormat);
    return 0;
}

int UpdateJSON_FlightPath(ACFT_PARAM* pS, const char* jsonfilename) {
    //Add the current returned coordinates to the jsonCoordinates Jzon object.
    jsonCoordinates.Add("altitude", pS->altitude);
    jsonCoordinates.Add("latitude", pS->latitude);
    jsonCoordinates.Add("longitude", pS->longitude);
    //Add the coordinates to the flight path, then clear the coordinates.
    jsonFlightPath.Add(jsonCoordinates);
    jsonCoordinates.Clear();
    //Now add the entire flightpath array to the jsonFlight object.
    jsonFlight.Add("Flightpath", jsonFlightPath);
    //Write the jsonFlight object to a JSON file on disk.
    Jzon::FileWriter::WriteFile(jsonfilename, jsonFlight, Jzon::NoFormat);
    //Remove the entire jsonFlightPath array from the jsonFlight object to avoid duplication next time the function executes.
    jsonFlight.Remove("Flightpath");
    return 0;
}
For sure you can do "flat file" storage yourself... but this is a symptom of needing a database. Something very light like SQLite, or mid-weight & open-source like MySQL, Firebird, or PostgreSQL.
But as to your question:
1) Leave the closing ] bracket off, and just keep the file open & appending. But if you don't close the file correctly, it will be damaged & need repair to be readable.
2) Your current option, writing a complete file each time, isn't safe from data loss either, as the moment you "open to overwrite" you lose all the data previously stored in the file. The workaround here is to rename the old file as a backup before you start writing.
You should also make backup copies of your file with the first option (say, at daily intervals). Otherwise data loss is likely to occur eventually: on Ctrl-C, power loss, program error or system crash.
Of course, if you use any of SQLite, MySQL, Firebird or PostgreSQL, all the data-integrity problems will be handled for you.
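For illustration, here is a hedged sketch of option 1 (all names are hypothetical, and Jzon is bypassed entirely for the per-second write):
#include <fstream>
#include <string>

// Append one coordinate object per second, leaving the array
// unterminated on disk; append "]}" once, when the flight ends,
// to make the file valid JSON again.
void AppendCoordinate(const std::string& jsonfilename,
                      double altitude, double latitude, double longitude,
                      bool firstEntry) {
    std::ofstream out(jsonfilename, std::ios::app);
    if (!firstEntry)
        out << ",";
    out << "{\"altitude\":" << altitude
        << ",\"latitude\":" << latitude
        << ",\"longitude\":" << longitude << "}";
}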

ERROR_NOT_ENOUGH_MEMORY Error when writing INI using WritePrivateProfileString, after 200k calls

I'm making a simple DLL packet sniffer in C++ that hooks into an app and writes the received packets into an INI file. Unfortunately, after 20-30 minutes it crashes the main app.
When a packet is received, receivedPacket() is called. After 20+ minutes, the WriteCount value is around 150,000-200,000 and I start getting a C++ runtime error/crash. The GetLastError() code I get is 0x8, which is ERROR_NOT_ENOUGH_MEMORY, and WritePrivateProfileStringA() returns 0.
void writeToINI(LPCSTR iSec, LPCSTR iKey, int iVal) {
    sprintf(inival, _T("%d"), iVal);
    WritePrivateProfileStringA(iSec, iKey, inival, iniloc);
    //sprintf(strc, _T("%d \n"), WriteCount);
    //WriteConsole(GetStdHandle(STD_OUTPUT_HANDLE), strc, strlen(strc), 0, 0);
    WriteCount++;
}

void receivedPacket(char *packet, WORD size) {
    switch (packet[2]) {
    case 0x30:
        // Size : 0x5F
        ID = *(signed char*)&packet[0x10];
        X = *(signed short*)&packet[0x20];
        Y = *(signed short*)&packet[0x22];
        Z = *(signed short*)&packet[0x24];
        sprintf(inisec, _T("PACKET_%d"), (ID+1));
        writeToINI(inisec, "id", ID);
        writeToINI(inisec, "x", X);
        writeToINI(inisec, "y", Y);
        writeToINI(inisec, "z", Z);
        break;
    [.....OTHER CASES.....]
    }
}
Thanks :)
WritePrivateProfileString() and GetPrivateProfileString() are very slow (they parse the entire INI file on each call). Instead you can:
use one of the existing parsing libraries, though I am not sure about their memory efficiency or support for sequential writes;
write your own sequential INI writer:
read the file (or only part by part, if it is too big);
find the section and key (if not found, create a new section at the end of the file, or find the insertion position if you want sorted sections), and save the file positions of the key and of the next key;
change the value;
save the result: the beginning of the original file up to the position of the key, then the changed key, then everything from the position of the next key to the end of the file. If a new section is created, you can simply append it to the original file. If packets rewrite the same ID often, you can pad each value with whitespace, large enough to hold any value of the desired type (for example, X=1 padded to the width of X=100); with a constant-size key you can update only that part of the file;
use a database, for example MySQL;
write a binary file (the fastest solution) and make a program to read the values, or to convert them to text.
A little note: I used GetPrivateProfileString() a few years ago to read a settings file (about 1 KB in size). Reading from HDD took 50 ms; reading from a USB flash drive took 1000 ms! After changing to (1) read the file into memory and (2) run my own parser, it ran in 1 ms on both HDD and USB.
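For illustration, a hedged sketch of the binary-file option (the record layout and names are hypothetical, based on the fields in the question):
#include <cstdio>

// One fixed-size record per packet.
struct PacketRecord {
    signed char id;
    short x, y, z;
};

// Append a record instead of reparsing and rewriting the INI file.
void writeRecord(const char* path, const PacketRecord& rec) {
    FILE* f = std::fopen(path, "ab"); // binary append mode
    if (f) {
        std::fwrite(&rec, sizeof(rec), 1, f);
        std::fclose(f);
    }
}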
Thanks for the replies guys, but it looks like the problem didn't come from WritePrivateProfileStringA() after all.
I just needed to allocate extra size in malloc() for the hook.
:)