Writing BLOB data to a SQL Server Database using ADO - C++

I need to write a BLOB to a varbinary column in a SQL Server database. Sounds easy, except that I have to do it in C++. I've been using ADO for the database operations (first question: is this the best technology to use?). So I've got the _Stream object and a Recordset object created, and the rest of the operation falls apart from there. If someone could provide a sample of how exactly to perform this seemingly simple operation, that would be great! My binary data is stored in an unsigned char array. Here is the codenstein that I've stitched together from what little I found on the internet:
_RecordsetPtr updSet;
updSet.CreateInstance(__uuidof(Recordset));
updSet->Open("SELECT TOP 1 * FROM [BShldPackets] Order by ChunkId desc",
             _conPtr.GetInterfacePtr(), adOpenDynamic, adLockOptimistic, adCmdText);

_StreamPtr pStream;                       // declare one first
pStream.CreateInstance(__uuidof(Stream)); // create it after

_variant_t varRecordset(updSet);
//pStream->Open(varRecordset, adModeReadWrite, adOpenStreamFromRecord, _bstr_t("n"), _bstr_t("n"));
_variant_t varOptional(DISP_E_PARAMNOTFOUND, VT_ERROR);
pStream->Open(
    varOptional,
    adModeUnknown,
    adOpenStreamUnspecified,
    _bstr_t(""),
    _bstr_t(""));

_variant_t bytes(_compressStreamBuffer);
pStream->Write(_compressStreamBuffer);
updSet.GetInterfacePtr()->Fields->GetItem("Chunk")->Value = pStream->Read(1000);
updSet.GetInterfacePtr()->Update();
pStream->Close();

As for whether ADO is the best technology in this case ... I'm not really sure. I personally think using ADO from C++ is a painful process. But it is pretty generic if you need that. I don't have a working example of using streams to write data at that level (although, somewhat ironically, I have code that I wrote using streams at the OLE DB level; however, that increases the pain level many times).
If, though, your data is always going to be loaded entirely in memory, I think using AppendChunk would be a simpler route:
ret = updSet.GetInterfacePtr()->Fields->
Item["Chunk"]->AppendChunk( L"some data" );


Get raw buffer for in-memory dataset in GDAL C++ API

I have generated a GeoTiff dataset in memory using GDALTranslate() with a /vsimem/ filepath. I need access to the buffer for the actual GeoTiff file to put it in a stream for an external API. My understanding is that this should be possible with VSIGetMemFileBuffer(); however, I can't seem to get this to return anything other than nullptr.
My code is essentially as follows:
//^^ GDALDataset* srcDataset created somewhere up here ^^
//psOptions struct has "-b 4" and "-of GTiff" settings.
const char* filep = "/vsimem/foo.tif";
GDALDataset* gtiffData = GDALTranslate(filep, srcDataset, psOptions, nullptr);
vsi_l_offset size = 0;
GByte* buf = VSIGetMemFileBuffer(filep, &size, true); //<-- returns nullptr
gtiffData seems to be a real dataset on inspection; it has all the appropriate properties (number of bands, raster size, etc.). When I provide a real filesystem location to GDALTranslate() rather than the /vsimem/ path and load it up in QGIS, it renders correctly too.
Looking at the source for VSIGetMemFileBuffer(), this should really only be returning nullptr if the file can't be found. This suggests I'm using it incorrectly. Does anyone know what the correct usage is?
Bonus points: Is there a better way to do this (stream the file out)?
Thanks!
I don't know anything about the C++ API. But in Python, the snippet below is what I sometimes use to get the contents of an in-mem file. In my case it's mainly VRTs, but it shouldn't be any different for other formats.
But as said, I don't know if the VSI API translates 1-on-1 to C++.
from osgeo import gdal
filep = "/vsimem/foo.tif"
# get the file size
stat = gdal.VSIStatL(filep, gdal.VSI_STAT_SIZE_FLAG)
# open file
vsifile = gdal.VSIFOpenL(filep, 'r')
# read entire contents
vsimem_content = gdal.VSIFReadL(1, stat.size, vsifile)
In the case of a VRT the content would be text, shown with something like print(vsimem_content.decode()). For a tiff it would of course be binary data.
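For what it's worth, the same calls do appear to exist on the C/C++ side in cpl_vsi.h, so a rough, untested equivalent might look like the following (assuming the /vsimem/ file has already been fully written and flushed, e.g. the dataset returned by GDALTranslate() has been closed):
#include "cpl_vsi.h"
#include <vector>

VSIStatBufL st;
if (VSIStatL("/vsimem/foo.tif", &st) == 0)
{
    VSILFILE* fp = VSIFOpenL("/vsimem/foo.tif", "rb");
    std::vector<GByte> content(static_cast<size_t>(st.st_size));
    VSIFReadL(content.data(), 1, content.size(), fp);  // read the whole file
    VSIFCloseL(fp);
    // content now holds the raw GeoTiff bytes
}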
I came back to this after putting in a workaround, and upon swapping things back over it seems to work fine. #mmomtchev suggested looking at the CPL_DEBUG output, which showed nothing unusual (and was silent during the actual VSIGetMemFileBuffer call).
In particular, for other reasons I had to put a GDALWarp call in between calling GDALTranslate and accessing the buffer, and it seems that this is what makes the difference. My guess is that GDALWarp is calling VSIFOpenL internally - although I can't find this in the source - and this does some kind of initialisation for VSIGetMemFileBuffer. Something to try for anyone else who encounters this.

How to buffer efficiently when writing to 1000s of files in C++

I am quite inexperienced when it comes to C++ I/O operations especially when dealing with buffers etc. so please bear with me.
I have a programme that has a vector of objects (1000s - 10,000s). At each time-step the state of the objects is updated. I want to have the functionality to log a complete state time history for each of these objects.
Currently I have a function that loops through my vector of objects, updates the state, and then calls a logging function which opens the file (ascii) for that object, writes the state to file, and closes the file (using std::ofstream). The problem is this significantly slows down my run time.
I've been recommended a couple things to do to help speed this up:
Buffer my output to prevent extensive I/O calls to the disk
Write to binary not ascii files
My question mainly concerns 1. Specifically, how would I actually implement this? Would each object effectively require its own buffer? Or would this be a single buffer that somehow knows which file to send each bit of data? If the latter, what is the best way to achieve this?
Thanks!
Maybe the simplest idea first: instead of logging to separate files, why not log everything to an SQLite database?
Given the following table structure:
create table iterations (
    id        integer not null,
    iteration integer not null,
    value     text    not null
);
At the start of the program, prepare a statement once:
sqlite3_stmt *stmt;
sqlite3_prepare_v3(db, "insert into iterations values(?,?,?)", -1, SQLITE_PREPARE_PERSISTENT, &stmt, NULL);
The question marks here are placeholders for future values.
After every iteration of your simulation, you could walk your state vector and execute the stmt a number of times to actually insert rows into the database, like so:
for (int i = 0; i < objects.size(); i++) {
    sqlite3_reset(stmt);
    // Fill in the three placeholders and execute the query.
    sqlite3_bind_int(stmt, 1, i);
    sqlite3_bind_int(stmt, 2, current_iteration); // Could be done once, but here for illustration.
    std::string state = objects[i].get_state();
    sqlite3_bind_text(stmt, 3, state.c_str(), state.size(), SQLITE_STATIC); // SQLITE_STATIC means "no need to free this"
    sqlite3_step(stmt); // Execute the query.
}
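If the inserts themselves turn out to be the bottleneck, wrapping each time-step in a single transaction usually helps a lot, because SQLite otherwise commits to disk after every statement. A minimal sketch, using the same db handle as above:
sqlite3_exec(db, "begin transaction", NULL, NULL, NULL);
// ... run the insert loop from above ...
sqlite3_exec(db, "commit", NULL, NULL, NULL);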
You can then easily query the history of each individual object using the SQLite command-line tool or any database manager that understands SQLite.
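For example, the complete history of a hypothetical object with id 42 would be something like:
select iteration, value from iterations where id = 42 order by iteration;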

Sending data from client to SQL database (MoSync)

I sincerely apologize if this has been asked before; however, I was unable to find a suitable answer that appeared similar to my current situation. I am developing an app with MoSync using MAUI because of its consistent appearance across all platforms. I am running into issues with understanding MAHandles, as well as how to go about sending the SQLite information to a web address. The SQLite commands will then be converted to MySQL commands using a RedBean PHP script, and then sent to the permanent database.
My biggest concerns are 2 items:
1. Declaring connections that are usable through MAHandles (I have already gotten the SQLite commands working without using MAHandles; however, declaring the database address in resources.lstx still evades me)
2. Declaring MAHandles in general.
Also, I understand that strings are much more effective, but I'm disregarding that because, given the age of MAUI, its capabilities appear to work much more smoothly with char arrays.
I can provide additional clarification if needed so that I can help speed this process up.
Thank you ahead of time, and hopefully this will help others trying their hands at MoSync's immaculate product.
I have no experience with SQLite whatsoever, but I'm assuming handling SQLite commands is the job of your server-side application. To be clear, you are sending SQLite commands from your mobile app to a server-side app via a URL, correct? If you need help on this you should search "CGI". CGI is essentially a way to execute a server-side application with arguments via an http:// request.
This means your app should have a manager that constructs a URL with the right SQLite commands based on the input events sent to your mobile app (buttons, text fields, etc).
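Something along these lines, purely as an illustration (the host, script name, and parameter names here are made up):
#include <string>

// Builds the request URL from UI input before handing it to the downloader.
std::string buildInsertUrl(const std::string& table, const std::string& value)
{
    std::string url = "http://example.com/cgi-bin/store.php?table=";
    url += table;
    url += "&value=";
    url += value;   // real code should URL-encode this
    return url;
}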
As far as Mosync goes, MAHandles can be used for many things including downloading.
Take a look at the MAUtil::DownloadListener class on Mosync's doxygen pages.
You will see that there are full descriptions of 5 pure virtual functions that you will need to implement.
The bulk of your code will probably be in finishedDownloading( Downloader* dl, MAHandle data ). It is here that the MAHandle "data", will point to the beginning of the data segment that you downloaded.
I read my data into a char* since I am downloading text.
Here's a snippet:
void MainScreen::finishedDownloading( Downloader* dl, MAHandle data )
{
    char* mData = new char[ maGetDataSize( data ) + 1 ];
    memset( mData, 0, maGetDataSize( data ) + 1 );
    maReadData( data, mData, 0, maGetDataSize( data ) );

    // Destroy the store
    maDestroyObject( data );

    // Do something with mData;
}
Here's one example of setting the font of NativeUI::Widget text using an MAHandle:
MAHandle font = maFontLoadDefault( FONT_TYPE_SERIF |
                                   FONT_TYPE_MONOSPACE |
                                   FONT_STYLE_NORMAL, 0, Dimensions::DIM_LIST_ELEM_FONT_SIZE );
ListViewItem* items = new ListViewItem();
items->setFont( font );

What is the best way to return an image or video file from a function using C++?

I am writing a C++ library that fetches and returns either image data or video data from a cloud server using libcurl. I've started writing some test code but am still stuck on designing the API because I'm not sure about the best way to handle these media files. Storing the data in a char/string variable as binary data seems to work, but I wonder if that would take up too much RAM if the files are big. I'm new to this, so please suggest a solution.
You can use something like zlib to compress it in memory, and then uncompress it only when it needs to be used; however, most modern computers have quite a lot of memory, so you can handle quite a lot of images before you need to start compressing. With videos, which are effectively a LOT of images, it becomes a bit more important -- you tend to decompress as you go, and possibly even stream-from-disk as you go.
The usual way to handle this, from an API point of view, is to have something like an Image object and a Video object (classes). These objects would have functions to "get" the uncompressed image/frame. The "get" function would check to see if the data is currently compressed; if it is, it would decompress it before returning it; if it's not compressed, it can return it immediately. The way the data is actually stored (compressed/uncompressed/on disk/in memory) and the details of how to work with it are thus hidden behind the "get" function. Most importantly, this model lets you change your mind later, adding additional types of compression, adding disk-streaming support, etc., without changing how the code that calls the get() function is written.
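To make that concrete, here is a rough sketch of what such a class might look like; the zlib usage and the stored uncompressed size are just illustrative choices, not a prescription:
#include <cstdint>
#include <vector>
#include <zlib.h>

class Image {
public:
    Image(std::vector<uint8_t> compressed, size_t rawSize)
        : compressed_(std::move(compressed)), rawSize_(rawSize) {}

    // Returns the uncompressed pixels, inflating them on first use.
    const std::vector<uint8_t>& get() {
        if (raw_.empty() && !compressed_.empty()) {
            raw_.resize(rawSize_);
            uLongf destLen = static_cast<uLongf>(rawSize_);
            uncompress(raw_.data(), &destLen,
                       compressed_.data(), static_cast<uLong>(compressed_.size()));
            compressed_.clear();   // drop the compressed copy once inflated
        }
        return raw_;
    }

private:
    std::vector<uint8_t> compressed_;  // zlib-compressed bytes, if still held
    std::vector<uint8_t> raw_;         // uncompressed bytes, filled lazily
    size_t rawSize_;                   // size of the uncompressed data
};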
The other challenge is how you return an Image or Video object from a function. You can do it like this:
Image getImageFromURL( const std::string &url );
But this has the interesting problem that the image is "copied" during the return process (sometimes; it depends on how the compiler optimizes things). This way is more memory efficient:
void getImageFromURL( const std::string &url, Image &result );
This way, you pass in the image object into which you want your image loaded. No copies are made. You can also change the 'void' return value into some kind of error/status code, if you aren't using exceptions.
If you're worried about what to do, code for both returning the data in an array and for writing the data to a file ... and pass the responsibility to choose to the caller. Make your function something like
/* one of dst and outfile should be NULL */
/* if dst is not NULL, dstlen specifies the size of the array */
/* if outfile is not NULL, data is written to that file */
/* the return value indicates success (0) or reason for failure */
int getdata(unsigned char *dst, size_t dstlen,
            const char *outfile,
            const char *resource);
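A caller could then pick whichever mode fits; for example (the URL is just a placeholder):
/* fetch into memory: dst is used, outfile is NULL */
unsigned char buf[1 << 20];
if (getdata(buf, sizeof buf, NULL, "http://example.com/picture.jpg") != 0)
    /* handle the error */;

/* or stream straight to a file instead: dst is NULL, outfile is used */
if (getdata(NULL, 0, "picture.jpg", "http://example.com/picture.jpg") != 0)
    /* handle the error */;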

How can I load a SQLITE database from a buffer with the C API?

I am trying to load a database from memory instead of opening a .sqlite file. I have read the C/C++ API reference but I cannot find the proper method.
The buffer I am trying to load is simply an sqlite file loaded into memory. I just want to use this buffer (a const char* array) without going through the filesystem (I could have saved this buffer to a file and then loaded the file, but no).
First, I create a memory DB:
mErrorCode = sqlite3_open_v2(":memory:", &mSqlDatabase, lMode, NULL);
This returns SQLITE_OK. Then I tried to use the buffer as a statement and call preparev2(MyDB, MyBufferData, MyBufferLength, MyStatement, NULL), but it's not really a statement, and it returns an error. Same result if I call exec(MyDB, MyBufferData, NULL, NULL, NULL) directly.
I guess there is an appropriate method to achieve this as it might be common to load a DB from a stream or from decrypted data...
Thanks.
I believe that the proper way to do this is to create an SQLite VFS layer around your buffer, as mentioned in this thread. A possible/partial solution, spmemvfs, is mentioned here, although I have never tried it.
If you don't need concurrency for your DB, creating your own VFS implementation should be quite simple.
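If you can rely on a reasonably recent SQLite (the serialize/deserialize API is compiled in by default since 3.36), another option that avoids a custom VFS is sqlite3_deserialize(), which attaches an in-memory database image directly. A rough sketch, where buffer and bufferLen stand in for your const char* array and its size:
sqlite3 *db = NULL;
sqlite3_open(":memory:", &db);

/* sqlite3_deserialize takes over management of the buffer, so hand it a
   malloc'd copy and let it free that copy when the connection closes. */
unsigned char *copy = (unsigned char *)sqlite3_malloc64(bufferLen);
memcpy(copy, buffer, bufferLen);

int rc = sqlite3_deserialize(db, "main", copy, bufferLen, bufferLen,
                             SQLITE_DESERIALIZE_FREEONCLOSE |
                             SQLITE_DESERIALIZE_RESIZEABLE);
/* rc == SQLITE_OK means db is now backed by the in-memory image */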