Reading the content of an InMemoryRandomAccessStream into a Buffer using C++/WinRT

I'm trying to serialise the content of a winrt::Windows::UI::Input::Inking::InkStrokeContainer. Microsoft provides the very convenient ISF format to do this.
Now I want to store the content of the InkStrokeContainer into a winrt::Windows::Storage::Streams::Buffer object as follows:
// This code is running in a worker thread
using namespace winrt::Windows::Foundation;
using namespace winrt::Windows::UI::Input;
using namespace winrt::Windows::UI::Input::Inking;
using namespace winrt::Windows::Storage::Streams;
InkStrokeContainer inkContainer;
// Add some strokes to the inkContainer
InMemoryRandomAccessStream stream;
inkContainer.SaveAsync(stream, InkPersistenceFormat::Isf).get();
Buffer buffer(stream.Size());
stream.ReadAsync(buffer, buffer.Capacity(), InputStreamOptions::None).get();
std::cout << "Buffer size: " << buffer.Length() << "Stream size: " << stream.Size() << std::endl;
However, this code doesn't work and the output of the last line is always:
Buffer size: 0 Stream size: 1102
It seems that nothing is being written into the buffer and I cannot figure out why.
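A likely cause, assuming the usual IRandomAccessStream semantics: SaveAsync leaves the stream's seek position at the end of the data it just wrote, so the subsequent ReadAsync starts at end-of-stream and reads nothing. Rewinding the stream before reading should fill the buffer:
// Rewind to the start; SaveAsync left the position at the end of the written data
stream.Seek(0);
stream.ReadAsync(buffer, buffer.Capacity(), InputStreamOptions::None).get();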

Related

Portable way to read a file in C++ and handle possible errors

I want to do a simple thing: read the first line from a file, and do a proper error reporting in case there is no such file, no permission to read the file and so on.
I considered the following options:
std::ifstream. Unfortunately, there is no portable way to report system errors. Some other answers suggest checking errno after reading failed, but the standard does not guarantee that errno is set by any functions in iostreams library.
C style fopen/fread/fclose. This works, but is not as convenient as iostreams with std::getline. I'm looking for a C++ solution.
Is there any way to accomplish this using C++14 and boost?
Disclaimer: I am the author of AFIO. But exactly what you are looking for is https://ned14.github.io/afio/ which is the v2 library incorporating the feedback from its Boost peer review in August 2015. See the list of features here.
I will of course caveat that this is an alpha quality library, and you should not use it in production code. However, quite a few people already are doing so.
How to use AFIO to solve the OP's problem:
Note that AFIO is a very low level library, hence you have to type a lot more code to achieve the same as iostreams; on the other hand, you get no memory allocation, no exception throwing, and no unpredictable latency spikes:
// Try to read first line from file at path, returning no string if file does not exist,
// throwing exception for any other error
optional<std::string> read_first_line(filesystem::path path)
{
using namespace AFIO_V2_NAMESPACE;
// The result<T> is from WG21 P0762, it looks quite like an `expected<T, std::error_code>` object
// See Outcome v2 at https://ned14.github.io/outcome/ and https://lists.boost.org/boost-announce/2017/06/0510.php
// Open for reading the file at path using a null handle as the base
result<file_handle> _fh = file({}, path);
// If _fh represents failure ...
if(!_fh)
{
// Fetch the error code
std::error_code ec = _fh.error();
// Did we fail due to file not found?
// It is *very* important to note that ec contains the *original* error code which could
// be POSIX, or Win32 or NT kernel error code domains. However we can always compare,
// via 100% C++ 11 STL, any error code to a generic error *condition* for equivalence
// So this comparison will work as expected irrespective of original error code.
if(ec == std::errc::no_such_file_or_directory)
{
// Return empty optional
return {};
}
std::cerr << "Opening file " << path << " failed with " << ec.message() << std::endl;
}
// If errored, result<T>.value() throws an error code failure as if `throw std::system_error(_fh.error());`
// Otherwise unpack the value containing the valid file_handle
file_handle fh(std::move(_fh.value()));
// Configure the scatter buffers for the read, ideally aligned to a page boundary for DMA
alignas(4096) char buffer[4096];
// There is actually a faster-to-type shortcut for this, but I thought it best to spell it out
file_handle::buffer_type reqs[] = {{buffer, sizeof(buffer)}};
// Do a blocking read from offset 0 possibly filling the scatter buffers passed in
file_handle::io_result<file_handle::buffers_type> _buffers_read = read(fh, {reqs, 0});
if(!_buffers_read)
{
std::error_code ec = _buffers_read.error();
std::cerr << "Reading the file " << path << " failed with " << ec.message() << std::endl;
}
// Same as before, either throw any error or unpack the value returned
file_handle::buffers_type buffers_read(_buffers_read.value());
// Note that buffers returned by AFIO read() may be completely different to buffers submitted
// This lets us skip unnecessary memory copying
// Make a string view of the first buffer returned
string_view v(buffers_read[0].data, buffers_read[0].len);
// Sub view that view with the first line
string_view line(v.substr(0, v.find_first_of('\n')));
// Return a string copying the first line from the file, or all 4096 bytes read if no newline found.
return std::string(line);
}
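A hypothetical call site in the same style as the snippet above (the file name is illustrative):
optional<std::string> line = read_first_line("config.txt");
if(line)
  std::cout << "first line: " << *line << std::endl;
else
  std::cout << "no such file" << std::endl;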
People on the boost-users mailing list pointed out that the boost.beast library has an OS-independent API for basic file IO, including proper error handling. There are three implementations of the file concept out of the box: POSIX, stdio, and Win32. The implementations support RAII (automatic closing on destruction) and move semantics. The POSIX file model automatically handles the EINTR error. Basically, this is sufficient and convenient to portably read a file chunk by chunk and, for example, explicitly handle the case where the file does not exist:
#include <boost/beast/core/file.hpp>
#include <boost/system/error_code.hpp>
using namespace boost::beast;
using namespace boost::system;
file f;
error_code ec;
f.open("/path/to/file", file_mode::read, ec);
if(ec == errc::no_such_file_or_directory) {
// ...
} else {
// ...
}
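Once the file is open, reading reports through an error_code as well. A minimal sketch continuing the snippet above (assuming the open succeeded; beast's file::read takes a buffer, a byte count, and an error_code):
char buf[4096];
std::size_t n = f.read(buf, sizeof(buf), ec);
if(!ec) {
    // the first n bytes of the file are now in buf
}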
The best thing to do could be to wrap Boost WinAPI and/or POSIX APIs.
The "naive" C++ standard library thing (with bells and whistles) doesn't get you too far:
Live On Coliru
#include <iostream>
#include <fstream>
#include <vector>
template <typename Out>
Out read_file(std::string const& path, Out out) {
std::ifstream s;
s.exceptions(std::ios::badbit | std::ios::eofbit | std::ios::failbit);
s.open(path, std::ios::binary);
return out = std::copy(std::istreambuf_iterator<char>{s}, {}, out);
}
void test(std::string const& spec) try {
std::vector<char> data;
read_file(spec, back_inserter(data));
std::cout << spec << ": " << data.size() << " bytes read\n";
} catch(std::ios_base::failure const& f) {
std::cout << spec << ": " << f.what() << " code " << f.code() << " (" << f.code().message() << ")\n";
} catch(std::exception const& e) {
std::cout << spec << ": " << e.what() << "\n";
}
int main() {
test("main.cpp");
test("nonexistent.cpp");
}
Prints...:
main.cpp: 823 bytes read
nonexistent.cpp: basic_ios::clear: iostream error code iostream:1 (iostream error)
Of course you can add more diagnostics using <filesystem>, but
that's susceptible to races, as mentioned (depending on your application, these can even open up security vulnerabilities, so just say "no").
Using boost::filesystem::ifstream doesn't change the exceptions raised
Worse still, using Boost Iostreams fails to raise any errors:
template <typename Out>
Out read_file(std::string const& path, Out out) {
namespace io = boost::iostreams;
io::stream<io::file_source> s;
s.exceptions(std::ios::badbit | std::ios::eofbit | std::ios::failbit);
s.open(path, std::ios::binary);
return out = std::copy(std::istreambuf_iterator<char>{s}, {}, out);
}
Happily prints:
main.cpp: 956 bytes read
nonexistent.cpp: 0 bytes read
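One way to at least detect the failed open with Boost Iostreams is to ask the device directly; a sketch (hedged: this relies on file_source exposing is_open(), and reuses the path variable from the snippet above):
namespace io = boost::iostreams;
io::file_source src(path, std::ios::binary);
if (!src.is_open()) {
    // the file could not be opened; handle the error here
}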
Live On Coliru
#include <iostream>
#include <fstream>
#include <string>
#include <system_error>
using namespace std;
int
main()
{
ifstream f("testfile.txt");
if (!f.good()) {
error_code e(errno, system_category());
cerr << e.message();
//...
}
// ...
}
ISO C++ Standard:
The contents of the header <cerrno> are the same as the POSIX header <errno.h>, except that errno shall be defined as a macro. [Note: The intent is to remain in close alignment with the POSIX standard. — end note] A separate errno value shall be provided for each thread.
Check this code:
uSTL is a partial implementation of the C++ standard library that focuses on
decreasing the memory footprint of user executables.
https://github.com/msharov/ustl/blob/master/fstream.cc

Armadillo reading MAT file error

I'm currently cross-compiling for the BeagleBone Black in a Visual Studio environment, using Armadillo to translate MATLAB code into C++.
This is a signal processing project, so I need a way to read and write binary data files, specifically .mat files. Thankfully, the Armadillo documentation says that you can load .mat files directly into a matrix using .load()
I attempted that at first, but it seems like it's not reading the file correctly, nor is it reading all the entries. My reference file is a 2000x6 matrix, and the created armadillo matrix is 5298x1. I know that without an armadillo-mimicking header, it will be converted into a column vector and I will need to reshape it using .reshape(), yet it simply isn't receiving all the entries, and by inspection, the entries it did read are wrong.
I'm not sure what the problem is. I've placed the data reference .mat files in the Debug folder for the remote project on the BBB, where the .out compiled file is created. Is there another way I should integrate it?
Also, help with mimicking the armadillo header or other suggestions are welcome.
If you need anything, please let me know.
Here is the test program I am using:
#include <iostream>
#include <armadillo>
using namespace std;
using namespace arma;
int main()
{
mat data_ref;
data_ref.load("Epoxy_6A_Healthy_Output_200kHz_Act1_001.mat");
cout << "For Data_ref, there are " << data_ref.n_cols << " columns and " << data_ref.n_rows << " rows.\n";
cout << "First item: " << data_ref(0) << "\n6th item: " << data_ref(6) << "\n2000th item: " << data_ref(2000);
data_ref.reshape(2000, 6);
cout << "For Data_ref, there are " << data_ref.n_cols << " columns and " << data_ref.n_rows << " rows.\n";
cout << "First item: " << data_ref(0,0) << "\nLast Item: " << data_ref(1999,5);
cout << "\nDone";
return 0;
}
The first element in the .mat file is 0.0, and the last element is 0.0014.
Here is the output.
For Data_ref, there are 1 columns and 5298 rows.
First item: 8.48749e-53
6th item: 9.80727e+256
2000th item: -2.4474e+238For Data_ref, there are 6 columns and 2000 rows.
First item: 8.48749e-53
Last Item: 0
Done
The program '' has exited with code 0 (0x0).
Thanks
Armadillo does not support MATLAB's .mat format. In the documentation they refer to the Armadillo mat binary format. You may, however, save the data in MATLAB using the HDF5 binary format and import it into Armadillo, but then you have to download the HDF5 library and reconfigure Armadillo. See the hdf5_binary section in the documentation.
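A rough sketch of that route, with everything hedged: hdf5_name requires a reasonably recent Armadillo, ARMA_USE_HDF5 must be defined and the build linked against the HDF5 library, and the file and variable names are illustrative. In MATLAB you would save with the v7.3 (HDF5-based) format, e.g. save('data.mat', 'data', '-v7.3'), then:
#define ARMA_USE_HDF5  // enable Armadillo's HDF5 support (set before the include)
#include <armadillo>
#include <iostream>

int main()
{
    arma::mat data_ref;
    // MATLAB v7.3 files store each variable as an HDF5 dataset named after it;
    // hdf5_name addresses the dataset "data" inside data.mat
    if (!data_ref.load(arma::hdf5_name("data.mat", "data")))
    {
        std::cerr << "load failed\n";
        return 1;
    }
    // Caveat: the matrix may arrive transposed relative to MATLAB,
    // since HDF5 stores row-major while MATLAB is column-major.
    std::cout << data_ref.n_rows << " x " << data_ref.n_cols << "\n";
    return 0;
}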

How to disable cout output at runtime?

I often use cout for debugging purposes in many different places in my code, and then I get frustrated and comment them all out manually.
Is there a way to suppress cout output at runtime?
And more importantly, let's say I want to suppress all cout output, but I still want to see one specific output (say, the final output of the program) in the terminal.
Is it possible to use an "other way" of printing to the terminal for showing the program output, so that when cout is suppressed I still see things printed using this "other way"?
Sure, you can (example here):
#include <iostream>

int main() {
std::cout << "First message" << std::endl;
std::cout.setstate(std::ios_base::failbit);
std::cout << "Second message" << std::endl;
std::cout.clear();
std::cout << "Last message" << std::endl;
return 0;
}
Outputs:
First message
Last message
This is because putting the stream in fail state will make it silently discard any output, until the failbit is cleared.
To suppress output, you can disconnect the underlying buffer from cout.
#include <iostream>
using namespace std;
int main(){
// get underlying buffer
streambuf* orig_buf = cout.rdbuf();
// set null
cout.rdbuf(nullptr);
cout << "this will not be displayed." << endl;
// restore buffer
cout.rdbuf(orig_buf);
cout << "this will be dispalyed." << endl;
return 0;
}
Don't use cout for debugging purposes; instead, define a different object (or function, or macro) that calls through to it. Then you can disable that function or macro in one single place, as in the sketch below.
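One possible shape for that, as a sketch with illustrative names (DBG and g_debug_on are not part of the answer):
#include <iostream>

bool g_debug_on = true;  // flip this at runtime to silence all debug output

// Stream-style debug macro: DBG("x = " << x);
#define DBG(msg) do { if (g_debug_on) std::cout << msg << std::endl; } while (0)

int main() {
    DBG("visible");        // printed
    g_debug_on = false;
    DBG("suppressed");     // silently skipped
    std::cout << "final result" << std::endl;  // ordinary output is unaffected
}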
You can use cerr, the standard output stream for errors, for your debugging purposes.
Also, there is clog, the standard output stream for logging.
Typically, they both behave like cout.
Example:
cerr << 74 << endl;
Details: http://www.cplusplus.com/reference/iostream/cerr/
http://www.cplusplus.com/reference/iostream/clog/
If you include files which involve cout, you may want to run the suppression code at program start (outside of main), which can be done like this:
#include <iostream>

struct Clearer {
Clearer() { std::cout.setstate(std::ios::failbit); }
} output_clearer;
It seems you print debug messages. You could use TRACE within Visual C++/MFC, or you might want to create a Debug() function which takes care of it. You can implement it to turn on only if a distinct flag is set. A lot of programs use a command-line parameter called verbose (or -v) to control the behavior of their log and debug messages, as sketched below.
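For instance (the Debug helper and flag handling here are illustrative, not from the answer):
#include <cstring>
#include <iostream>

bool verbose = false;

void Debug(const char* msg) {
    if (verbose)
        std::cerr << "[debug] " << msg << '\n';
}

int main(int argc, char* argv[]) {
    // enable debug output only when the program is run with -v
    for (int i = 1; i < argc; ++i)
        if (std::strcmp(argv[i], "-v") == 0)
            verbose = true;
    Debug("only shown when run with -v");
    std::cout << "program output" << std::endl;
    return 0;
}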

C++ Memory leaks on Windows 7

I'm writing a program (C++, MinGW 32-bit) to batch-process images using OpenCV functions, with AngelScript as a scripting language. As of right now, my software has some memory leaks that add up pretty quickly (the images are 100-200 MB each, and I'm processing thousands at once), but I'm running into an issue where Windows doesn't seem to release the memory used by my program until rebooting.
If I run it on a large set of images, it runs for a while and eventually OpenCV throws an exception saying that it's out of memory. At that point, I close the program, and Task Manager's physical memory meter drops back down to where it was before I started. But here's the catch - every time I try to run the program again, it will fail right off the bat to allocate memory to OpenCV, until I reboot the computer, at which point it will work just great for a few hundred images again.
Is there some way Windows could be holding on to that memory? Or is there another reason why Windows would fail to allocate memory to my program until a reboot occurs? This doesn't make sense to me.
EDIT: The computer I'm running this program on is Windows 7 64-bit with 32 GB of RAM, so even with my program's memory issues, it's only using a small amount of the available memory. Normally the program maxes out at a little over 1 GB of RAM before it quits.
EDIT 2: I'm also using FreeImage to load the images, I forgot to mention that. Here's the basis of my processing code:
//load bitmap with FreeImage
FIBITMAP *bitmap = NULL;
FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
fif = FreeImage_GetFileType(filename.c_str(), 0);
bitmap = FreeImage_Load(fif, filename.c_str(), 0);
if (!bitmap) {
LogString("ScriptEngine: input file is not readable.");
processingFile = false;
return false;
}
//convert FreeImage bitmap to my custom wrapper for OpenCV::Mat
ScriptImage img;
img.image = fi2cv(bitmap);
FreeImage_Unload(bitmap);
try {
//this executes the AngelScript code
r = ctx->Execute();
} catch (const std::exception& e) {
std::cout << "Exception in " << __FILE__ << ", line " << __LINE__ << ", " << __FUNCTION__ << ": " << e.what() << std::endl;
}
try {
engine->GarbageCollect(asGC_FULL_CYCLE | asGC_DESTROY_GARBAGE);
} catch (const std::exception& e) {
std::cout << "Exception in " << __FILE__ << ", line " << __LINE__ << ", " << __FUNCTION__ << ": " << e.what() << std::endl;
}
As you can see, the only pointer is to the FIBITMAP, which is freed.
It is very likely that you are making a copy of the image data on this line:
img.image = fi2cv(bitmap);
Since you are immediately freeing the bitmap afterwards, that data must persist after the free.
Check if there is a resource release for ScriptImage objects.
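If fi2cv builds a cv::Mat header over FreeImage's pixel buffer without copying, that Mat dangles as soon as FreeImage_Unload runs; conversely, if fi2cv allocates a copy that ScriptImage never releases, that is the leak. A sketch of the deep-copy variant, assuming fi2cv returns a non-owning Mat:
// clone() deep-copies into a Mat that owns (and reference-counts) its own data
img.image = fi2cv(bitmap).clone();
FreeImage_Unload(bitmap);  // now safe: the Mat no longer points into the bitmap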

Why does my output go to cout rather than to file?

I am doing some scientific work on a system with a queue. The cout output goes to a log file whose name is specified with command-line options when submitting to the queue. However, I also want separate output to a file, which I implement like this:
ofstream vout("potential.txt"); ...
vout<<printf("%.3f %.5f\n",Rf*BohrToA,eval(0)*hatocm);
However it gets mixed in with the output going to cout and I only get some cryptic repeating numbers in my potential.txt. Is this a buffer problem? Other instances of outputting to other files work... maybe I should move this one away from an area that is cout heavy?
You are sending the value returned by printf to vout, not the string. printf writes the formatted text to standard output and returns the number of characters printed; that count is what ends up in your file.
You should simply do:
vout << Rf*BohrToA << " " << eval(0)*hatocm << "\n";
You are getting your C and C++ mixed together.
printf is a function from the C library which prints a formatted string to standard output. ofstream and its << operator are how you print to a file in C++ style.
You have two options here, you can either print it out the C way or the C++ way.
C style:
FILE* vout = fopen("potential.txt", "w");
fprintf(vout, "%.3f %.5f\n", Rf*BohrToA, eval(0)*hatocm);
fclose(vout);
C++ style:
#include <iomanip>
//...
ofstream vout("potential.txt");
vout << fixed << setprecision(3) << (Rf*BohrToA) << " ";
vout << setprecision(5) << (eval(0)*hatocm) << endl;
If this is on a *nix system, you can simply write your program to send its output to stdout and then use a pipe and the tee command to direct the output to one or more files as well. e.g.
$ command parameters | tee outfile
will cause the output of command to be written to outfile as well as the console.
You can also do this on Windows if you have the appropriate tools installed (such as GnuWin32).