StorageFile::ReadAsync exception in C++/CX?

I have been trying to use the C++/CX StorageFile::ReadAsync() to read a file in a Store app, but it always returns an invalid parameter exception no matter what I try:
// "file" are returned from FileOpenPicker
IRandomAccessStream^ reader = create_task(file->OpenAsync(FileAccessMode::Read)).get();
if (reader->CanRead)
{
BitmapImage^ b = ref new BitmapImage();
const int count = 1000000;
Streams::Buffer^ bb = ref new Streams::Buffer(count);
create_task(reader->ReadAsync(bb, 1, Streams::InputStreamOptions::None)).get();
}
I have turned on all the manifest capabilities and added "file open picker" + "file type association" under Declarations. Any ideas? Thanks!
PS: most solutions I found are for C#, but the code structure is similar...

If this code is executing on the UI thread (or in any other Single Threaded Apartment, or STA), then the calls to .get() will throw if the tasks have not yet completed, because the call to .get() would block the thread. You must not block the UI thread or any other STA, and when compiling with C++/CX support enabled, the libraries enforce this.
If you turn on first chance exception handling in the debugger (Debug -> Exceptions..., check the C++ Exceptions check box), you should see that the first exception to be thrown is an invalid_operation exception, from the following line in <ppltasks.h>:
// In order to prevent Windows Runtime STA threads from blocking the UI, calling
// task.wait() task.get() is illegal if task has not been completed.
if (!_IsCompleted() && !_IsCanceled())
{
throw invalid_operation("Illegal to wait on a task in a Windows Runtime STA");
}
The "invalid parameter" you are reporting is the fatal error that is caused when this exception reaches the ABI boundary: the debugger is notified that the application is about to terminate because this exception was unhandled.
You need to restructure your code to use continuations, using task::then, as described in the article Asynchronous Programming in C++ Using PPL.

Just to make sure you understand the async pattern: what is happening in your code is that you call create_task and, immediately after the task has started, you try to get the result with .get(). A call to .get() will throw immediately if the task is still running or the file could not be found. Therefore, the correct way of structuring this is to use a .then on your file task, ensuring that you have the result of that task before starting the next one.
create_task(file->OpenAsync(FileAccessMode::Read)).then([](IRandomAccessStream^ reader)
{
//do stuff with the reader
});
At that point the reader is available, so you can do whatever you want with it, even start a new task.
Also, it is possible that the call to OpenAsync is failing because the file is empty; I would add a try/catch block to the previous task, the one that gets the file, just to make sure that's not the problem.
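For illustration, here is a minimal sketch (reusing the file, buffer size, and stream options from the question, and assuming the chain is started on the UI thread) of the same logic restructured as continuations so that nothing blocks the STA:
create_task(file->OpenAsync(FileAccessMode::Read)).then([](IRandomAccessStream^ reader)
{
    // Runs once the stream is open; start the read as the next task in the chain.
    auto buffer = ref new Streams::Buffer(1000000);
    return create_task(reader->ReadAsync(buffer, buffer->Capacity, Streams::InputStreamOptions::None));
}).then([](Streams::IBuffer^ filledBuffer)
{
    // Runs once the read has completed; filledBuffer->Length bytes are available here,
    // e.g. to hand to a BitmapImage source.
});
A final task-based continuation taking concurrency::task<void> could also be appended to surface any exceptions via .get() in a try/catch, as described in the first answer.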

Related

Poco - failure to open application log causes subsystem shutdown failure

I'm using Poco 1.6.0 and the Util::ServerApplication structure.
At the start of int main(const ArgVec& args) in my main class, I redirect all the logging to a file:
Poco::AutoPtr<Poco::FileChannel> chanFile = new Poco::FileChannel;
chanFile->setProperty("path", "C:\\doesnotexist\\file.log");
Poco::Util::Application::instance().logger().setChannel(chanFile);
If the log file cannot be opened, this causes an exception to be thrown, which I catch, and return an error code from main(). The Application::run() code in Poco's Application.cpp then calls Application::uninitialize().
The implementation of Application::uninitialize() iterates through each SubSystem, executing that subsystem's uninitialize().
But one of those is LogFile::uninitialize(), which causes the following message to be logged: Uninitializing subsystem: Logging Subsystem.
When it attempts to log that message, an exception is thrown since the log file could not be opened (for the same reason as before). That exception is caught somewhere in Poco's code and it attempts to log an error, which causes an exception, and that one finally terminates the program.
How should I deal with this issue? E.g. is it possible to tell the logging subsystem to not throw any exceptions?
There seems to be a greater issue too: if any subsystem's uninitialize() throws, execution leaves the subsystem shutdown loop in Application.cpp, so the other subsystems will not get a chance to shut down either.
You should make sure that the path exist before setting up the file channel, e.g.:
if (Poco::File("C:\\doesnotexist").exists())
{
Poco::AutoPtr<Poco::FileChannel> chanFile = new Poco::FileChannel;
chanFile->setProperty("path", "C:\\doesnotexist\\file.log");
Poco::Util::Application::instance().logger().setChannel(chanFile);
}
Application::uninitialize() will loop through the subsystems and log each iteration as a debug message - the idea is to catch problems before release.
UPDATE: as pointed out in the comments, the directory may exist at the time of the check but may not exist (or may not be accessible) afterwards, when logging actually happens. There is nothing in Poco that shields the user from that; you will have to make sure the directory exists and is accessible throughout the lifetime of the FileChannel using it. I have not found this to be an obstacle in practice. I did find the initial non-existence of a directory to be an annoying problem, and there is a proposal to add such an (optional/configurable) feature, but it has not yet been scheduled for inclusion in upcoming releases.
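As a rough sketch of that advice (setupLogging() is a hypothetical helper, and falling back to a console channel is an assumption, not part of the original answer), you could create the directory up front and keep a channel that cannot fail attached otherwise:
#include "Poco/AutoPtr.h"
#include "Poco/ConsoleChannel.h"
#include "Poco/Exception.h"
#include "Poco/File.h"
#include "Poco/FileChannel.h"
#include "Poco/Logger.h"
#include "Poco/Util/Application.h"

void setupLogging()
{
    Poco::Logger& logger = Poco::Util::Application::instance().logger();
    try
    {
        // Create the directory (and any missing parents) so the FileChannel can open its file.
        Poco::File("C:\\doesnotexist").createDirectories();
        Poco::AutoPtr<Poco::FileChannel> chanFile = new Poco::FileChannel;
        chanFile->setProperty("path", "C:\\doesnotexist\\file.log");
        logger.setChannel(chanFile);
    }
    catch (const Poco::Exception&)
    {
        // Fall back to the console so later log calls (e.g. during uninitialize())
        // cannot throw because of a missing or inaccessible log file.
        Poco::AutoPtr<Poco::ConsoleChannel> chanCons = new Poco::ConsoleChannel;
        logger.setChannel(chanCons);
    }
}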

Boost HTTP server issue

I'm starting to use Boost, so maybe I'm messing something up.
I'm trying to set up http server with boost (ASIO). I've taken the code from docs: http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/examples/cpp03_examples.html (HTTP Server, the first one)
The only difference from the example is that I'm running the server via my own "run" method and starting io_service on a background thread, as in the docs: http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/reference/io_service.html
boost::asio::io_service::work work(io_service_);
(Also I'm stopping io_service from my run method too.)
When I start this modified server everything seems to be OK and the run method works fine. But when I try to get a doc from the server, the request hangs and control flow never reaches the "request_handle" method.
Am I missing something?
UPD: Here is the code of my run method:
void NetstreamServer::run()
{
LOG4CPLUS_DEBUG(logger, "NetstreamServer is running");
boost::asio::io_service::work work(io_service_);
try
{
while (true)
{
if (condition)
{
io_service_.stop();
break;
}
}
}
catch (std::exception const& e)
{
LOG4CPLUS_ERROR(logger, "NetstreamServer" << " caught exception: " << e.what());
}
}
You should call io_service_.run() - otherwise no one will dispatch the completion handlers of the Asio objects serviced by io_service_.
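For example, a minimal sketch of run() that actually dispatches handlers (keeping the NetstreamServer and logger names from the question, and assuming shutdown is requested by calling io_service_.stop() from a handler or another thread) might look like this:
void NetstreamServer::run()
{
    LOG4CPLUS_DEBUG(logger, "NetstreamServer is running");
    // Keep run() from returning while there is temporarily no pending work.
    boost::asio::io_service::work work(io_service_);
    try
    {
        // Blocks and dispatches completion handlers (accepts, reads, writes)
        // until io_service_.stop() is called.
        io_service_.run();
    }
    catch (std::exception const& e)
    {
        LOG4CPLUS_ERROR(logger, "NetstreamServer" << " caught exception: " << e.what());
    }
}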
Without the code you changed, everyone here can only guess. Unfortunately, you also do not mention the compiler and the OS you are using. Even though boost claims to be platform independent, you should always include this information, because in reality platforms behave differently, even with boost.
Let me guess: you use Microsoft Windows? How do you prevent the "main" function from exiting? Since you moved the blocking "run" function out of it into another thread, the main function has no wait point anymore. Let me guess again: you used something like "getchar". With that, you can exit your server just by hitting the return key. If yes, the problem is the getchar, which unfortunately blocks every IO of the asio socket implementation, but only on Windows-based systems.
I would not need to guess if you included the information mentioned above in your post - in particular, all(!) the changes you made to the code sample.

Compilation error with C++ Metro App Tutorial - task continuation

I am working through the "Tutorial: Create your first Metro style app using C++" on MSDN (link). Shortly into "part 2" of it, I am running into an error. I'm running this on a Windows 8 Release Preview (May 31st) VM with the Visual Studio 2012 Release Candidate (the latest).
Where I'm at is the code section after adding the 3 new Metro pages: the ItemsPage, the SplitPage, and the new "DetailPage". Adding those went fine, but when I add the code directly below, in the section labeled "To modify the asynchronous code that initializes the data model", it produces two copies of the error below:
error C2893: Failed to specialize function template ''unknown-type' Concurrency::details::_VoidReturnTypeHelper(_Function,int,...)' c:\program files (x86)\microsoft visual studio 11.0\vc\include\ppltasks.h 404 1 SimpleBlogReader
Then I took out all the code from that section and started adding it back a piece at a time to find out where the error "really" was, as I obviously hadn't modified that standard header file. It turns out it's in the task chain in the App::InitDataSource method:
SyndicationClient^ client = ref new SyndicationClient();
for(wstring url : urls)
{
// Create the async operation.
// feedOp is an IAsyncOperationWithProgress<SyndicationFeed^, RetrievalProgress>^
auto feedUri = ref new Uri(ref new String(url.c_str()));
auto feedOp = client->RetrieveFeedAsync(feedUri);
// Create the task object and pass it the async operation.
// SyndicationFeed^ is the type of the return value
// that the feedOp operation will eventually produce.
// Then, initialize a FeedData object with the feed info. Each
// operation is independent and does not have to happen on the
// UI thread. Therefore, we specify use_arbitrary.
create_task(feedOp).then([this] (SyndicationFeed^ feed) -> FeedData^
{
return GetFeedData(feed);
}, concurrency::task_continuation_context::use_arbitrary())
// Append the initialized FeedData object to the list
// that is the data source for the items collection.
// This has to happen on the UI thread. By default, a .then
// continuation runs in the same apartment thread that it was called on.
// Because the actions will be synchronized for us, we can append
// safely to the Vector without taking an explicit lock.
.then([fds] (FeedData^ fd)
{
fds->Append(fd);
// Write to VS output window in debug mode only. Requires <windows.h>.
OutputDebugString(fd->Title->Data());
OutputDebugString(L"\r\n");
})
// The last continuation serves as an error handler. The
// call to get() will surface any exceptions that were raised
// at any point in the task chain.
.then( [this] (concurrency::task<SyndicationFeed^> t)
{
try
{
t.get();
}
// SyndicationClient throws E_INVALIDARG
// if a URL contains illegal characters.
catch(Platform::InvalidArgumentException^ e)
{
// TODO handle error. For example purposes
// we just output error to console.
OutputDebugString(e->Message->Data());
}
}); //end task chain
I took out the lambdas one at a time (and put in a semicolon so it would compile): with just the first two it's fine, but adding the last one in the chain causes the error. Yet if I create_task just the last one, it compiles. Or the first and third together. Or just the first two.
Is the problem the second lambda? Is the header library getting confused by the void return type? Or is it something else? Working on this theory, I modified the "final" handler declaration to this:
// The last continuation serves as an error handler. The
// call to get() will surface any exceptions that were raised
// at any point in the task chain.
.then( [this] (concurrency::task<void> t)
Now THIS compiles. But according to the doc at msdn (here), is this right? There's a section called "Value-Based Versus Task-Based Continuations" on that page that is copied below:
Given a task object whose return type is T, you can provide a value of type T or task<T> to its continuation tasks. A continuation that takes type T is known as a value-based continuation. A value-based continuation is scheduled for execution when the antecedent task completes without error and is not canceled. A continuation that takes type task<T> as its parameter is known as a task-based continuation. A task-based continuation is always scheduled for execution when the antecedent task finishes, even when the antecedent task is canceled or throws an exception. You can then call task::get to get the result of the antecedent task. If the antecedent task was canceled, task::get throws concurrency::task_canceled. If the antecedent task threw an exception, task::get rethrows that exception. A task-based continuation is not marked as canceled when its antecedent task is canceled.
Is this saying that the error-handling continuation's parameter should match the type produced by the final .then continuation, or the type of the original create_task? If it's the final one (as I did above with void), will this continuation actually handle all of the errors above, or only errors from the final .then call?
Is this the right way to "fix" their example? Or not?
I think the problem is that the return type of the second lambda is what gets fed to the third one (note how FeedData^ matches the return type of the first continuation and the parameter type of the second). Since the second continuation does not return anything, void is the correct choice, and because you want the third continuation to capture errors, it needs to take concurrency::task<void> (based on the quote).
Also, based on the quote, this final task-based continuation will be called even if the antecedent (the second, in this case) task failed, and it will report any errors that happened during its execution when t.get() is called. I'm not sure about the case when the first task fails, but you can find out by throwing an arbitrary exception from the first task and seeing what happens.
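As an illustration of that typing rule (this is a sketch of the question's chain, not the tutorial's exact code), each continuation's parameter is determined by the return type of the continuation immediately before it, so a task-based error handler at the end must take task<void> once a void-returning continuation appears in the chain:
create_task(feedOp)                                  // produces task<SyndicationFeed^>
    .then([this](SyndicationFeed^ feed) -> FeedData^ // value-based: takes SyndicationFeed^
{
    return GetFeedData(feed);                        // chain is now task<FeedData^>
}, concurrency::task_continuation_context::use_arbitrary())
    .then([fds](FeedData^ fd)                        // value-based: takes FeedData^
{
    fds->Append(fd);                                 // returns nothing: chain is now task<void>
})
    .then([this](concurrency::task<void> t)          // task-based: parameter must be task<void>
{
    try
    {
        t.get();                                     // rethrows any exception from the whole chain
    }
    catch (Platform::InvalidArgumentException^ e)
    {
        OutputDebugString(e->Message->Data());
    }
});
Because the last continuation is task-based, it still runs when an earlier continuation is skipped due to an exception, so errors from any point in the chain surface at the t.get() call.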

How should you handle a file open error in a thread without cancellation?

I have a service application that reads a config file containing a list of files to open. The problem is that when it doesn't find a file in the directory, it throws an exception, thus cancelling the thread and stopping the service application as well.
Here's a code block of the function being called in a thread:
// file_ is a FILE* member of ServiceApp::File
FILE* file_;

ServiceApp::File::File( const char* filename, const char* mode ) :
    file_( fopen( filename, mode ) )
{
    if( !file_ )
    {
        // throwing will stop the service when the file doesn't exist; what workaround could we use?
        throw std::runtime_error("file open failure");
    }
}
Question:
How do we prevent this from happening, so that when a file listed in the config file is not found in the directory, the application just ignores it and continues with the process?
I would recommend using std::async. If a function run via std::async throws, the exception will be stored in the std::future. It will be rethrown when you call std::future::get(). However, if you don't do that, it will just be "ignored" and your application will keep on running.
Example:
#include <future>
#include <stdexcept>

auto lambda = [] {
    throw std::runtime_error("error");
};

// The exception is stored in the returned future; it is only rethrown if you call handle.get().
auto handle = std::async(std::launch::async, lambda);
For more info on std::async read this.
The simplest way is to put a try/catch block around the code that processes the file.
There is a high chance that you will encounter other exceptions being thrown in other special cases (e.g. when a network file disappears while it is being read), so check the code carefully for other exceptions.
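A minimal sketch of that approach (ProcessFile() and filesFromConfig are hypothetical stand-ins for the service's per-file work and its config list):
#include <exception>
#include <iostream>
#include <string>
#include <vector>

void ProcessFile(const std::string& path); // opens a ServiceApp::File and may throw

void ProcessAll(const std::vector<std::string>& filesFromConfig)
{
    for (const std::string& path : filesFromConfig)
    {
        try
        {
            ProcessFile(path); // may throw std::runtime_error("file open failure")
        }
        catch (const std::exception& e)
        {
            // Log and skip this file; the service keeps running.
            std::cerr << "skipping " << path << ": " << e.what() << '\n';
        }
    }
}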
The easiest would be just to try/catch your own exception in the thread, check whether it is that exception (and ignore it), and rethrow if it's some other exception.
I'm not sure whether it's possible for you to add info to the exception where it is thrown, or whether this is legacy code; in the latter case you'll have to resort to string comparison, which of course is ugly. Anyway:
// pseudo-code
while(GotFilesInQueue())
{
try
{
LoadNextFile();
}
catch(std::exception& e)
{
if(!IgnoreExceptionPredicate(e))
{
throw;
}
}
}
On Rethrowing

Handling Exceptions in a critical application that should not crash

I have a server application which I am debugging that basically parses scripts (VBScript, Python, JScript and SQL) for the application that requests it.
This is a very critical application which, if it crashes, causes havoc for a lot of users. The problem I am facing is how to handle exceptions so that the application can continue and the users know something is wrong in their scripts.
An example: in the SQL scripts the application normally returns a set of values (Date, Number, String and Number), so the scripts have to have a statement at the end such as:
into dtDate, Number, Number, sString. These are values that are built into the application, and the server application knows how to interpret them. The fields are treated in the server app as part of an array, and the return values normally have to be in a specific order, because the indexes of these fields into the array are hardcoded inside the server application.
Now when a user writing a script forgets one of these fields, the last field (normally the string) throws an IndexOutofBoundsException.
The question is how does one recover from exceptions of this nature without taking down the application?
Another example is an error in a script for which no parsing error message can be generated. These errors just disappear into the background in the application and eventually cause the server app to crash. The scripts on which it fails don't necessarily fail to execute entirely; parts of them execute and other parts don't, which makes the result look fairly odd to a user.
This server app is a native C++ application and uses COM technologies.
I was wondering if anyone has any ideas on the best way to handle exceptions such as the ones described above without crashing the application?
You can't really handle problems like this with exceptions. You could have a top-level catch block that catches the exception and hope that not too much program state got irrecoverably munched, in an attempt to keep the program alive. That still doesn't make the user happy; the query she is waiting for still doesn't run.
Ensuring that changes don't destabilize a critical business app requires organization: people who sign off on the changes and verify that they work as intended before they are allowed into production. QA.
Since you talk about parsing different languages, you probably have something like:
class IParser // parser interface
{
    virtual bool Parse( File& fileToParse, String& errMessage ) = 0;
};
class VBParser : public IParser
class SQLParser : public IParser
Suppose the Parse() method throws an exception that is not handled; then your entire app crashes. Here's a simplified example of how this could be fixed at the application level:
//somewhere main server code
void ParseFileForClient( File& fileToParse )
{
try
{
String err;
if( !currentParser->Parse( fileToParse, err ) )
ReportErrorToUser( err );
else
//process parser result
}
catch( std::exception& e )
{
ReportErrorToUser( FormatExceptionMessage( e ) );
}
catch( ... )
{
ReportErrorToUser( "parser X threw unknown exception; parsing aborted" );
}
}
If you know an operation can throw an exception, then you need to add exception handling to that area.
Basically, you need to write the code in an exception-safe manner, usually following these guidelines:
Work on temporary values that can throw exceptions.
Commit the changes using the temp values afterwards (usually this will not throw an exception).
If an exception is thrown while working on the temp values, nothing gets corrupted, and in the exception handler you can manage the situation and recover.
http://www.gotw.ca/gotw/056.htm
http://www.gotw.ca/gotw/082.htm
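A small sketch of those guidelines (ScriptResults, ParseScriptInto(), and UpdateResults() are hypothetical names, not from the question): do all throwing work on a copy, then commit with an operation that cannot throw:
#include <string>
#include <utility>
#include <vector>

struct ScriptResults
{
    std::vector<std::string> fields;
};

void ParseScriptInto(ScriptResults& out, const std::string& script); // may throw

void UpdateResults(ScriptResults& live, const std::string& script)
{
    ScriptResults tmp = live;            // work on a temporary copy; copying may throw
    ParseScriptInto(tmp, script);        // parsing may throw; "live" is still untouched
    std::swap(live.fields, tmp.fields);  // commit: swapping vectors does not throw
}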
It really depends on how long it takes to start up your server application. It may be safer to let the application crash and then reload it. Or, taking a cue from the Chrome browser, run different parts of your application in different processes that can crash independently. If you can safely recover from an exception and trust that your application state is OK, then fine, do it. However, catching std::exception and continuing can be risky.
There are simple to complex ways to babysit processes to make sure that, if they crash, they can be restarted. A couple of tools I use:
bluepill http://asemanfar.com/Bluepill:-a-new-process-monitoring-tool
pacemaker http://www.clusterlabs.org/
For simple exceptions that can happen inside your program due to user errors,
simply save the state that can be changed, and restore it like this:
SaveStateThatCanBeAlteredByScript();
try {
LoadScript();
} catch(std::exception& e){
RestoreSavedState();
ReportErrorToUser(e);
}
FreeSavedState();
If you want to prevent external code from crashing your process (possibly untrusted code, like plugins), you need an IPC scheme. On Windows, I think you can memory-map files with OpenFile(). On POSIX systems you can use sem_open() together with mmap().
If you have a server, you basically have a main loop that waits for a signal to start a job. The signal could be nothing at all, with your server just going through a list of files on the file system, or it could be more like a web server that waits for a connection and executes the script provided on the connection (or anything like that).
MainLoop()
{
while(job = jobList.getJob())
{
job.execute();
}
}
To stop the server from crashing because of the scripts you need to encapsulate the external jobs in a protected region.
MainLoop()
{
// Don't bother to catch exceptions from here.
// This probably means you have a programming error in the server.
while(job = jobList.getJob())
{
// Catch exception from job.execute()
// as these exceptions are generally caused by the script.
try
{
job.execute();
}
catch(MyServerException const& e)
{
// Something went wrong with the server not the script.
// You need to stop. So let the exception propagate.
throw;
}
catch(std::exception const& e)
{
log(job, e.what());
}
catch(...)
{
log(job, "Unknown exception!");
}
}
}
If the server is critical to your operation, then just detecting the problem and logging it is not always enough. A badly written server will crash, so you want to automate the recovery. You should therefore write some form of heartbeat process that checks at regular intervals whether the server process has crashed and, if it has, automatically restarts it.
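For example, a bare-bones watchdog on a POSIX system (the server binary path and the restart policy are assumptions for illustration) could look like this:
#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    for (;;)
    {
        pid_t pid = fork();
        if (pid < 0)
        {
            std::perror("fork");
            return 1;
        }
        if (pid == 0)
        {
            // Child: run the (hypothetical) script server.
            execl("/usr/local/bin/script-server", "script-server", (char*)nullptr);
            _exit(127); // exec failed
        }
        int status = 0;
        waitpid(pid, &status, 0); // block until the server exits or crashes
        std::fprintf(stderr, "server exited (status %d), restarting\n", status);
        sleep(1); // back off briefly before restarting
    }
}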