I am using POCO 1.6.0. I am trying to write a service that receives a JSON message on a raw socket and parses it.
However, the only way that POCO's parser seems to work is to take an entire string as input, and either return the parsed result, or throw a "Syntax error" exception.
This means I have to re-parse the whole message each time new bytes arrive on the socket, and there is no way to distinguish an actual syntax error from a message that is simply incomplete so far.
The parseChar function looks nice but it is private. Is there any way to have the parser parse some of a message and remain in that state so that I can resume parsing by passing more data?
Also, is there any way to distinguish actual syntax errors from incomplete messages (and preferably get feedback about the exact nature of the syntax error)?
Pseudocode:
#include <string>
#include <Poco/JSON/Parser.h>
#include <Poco/Exception.h>

Poco::JSON::Parser parser;
std::string input_buffer;
for (;;)
{
    // (append byte(s) from socket into input_buffer)
    // (return failure if this read times out after 5 seconds)
    parser.reset();
    try
    {
        parser.parse(input_buffer);
        break;
    }
    catch (Poco::Exception& e)
    {
        // (abort, but we don't know if the data is incomplete or malformed)
    }
}
Note: I realize that this problem could be mooted by having the client frame the entire message as described in this thread; however, I was hoping to keep things as simple as possible for the client by having a correctly-formed packet alone be sufficient to define a frame (method 5 of that question).
There currently is no way to do either of the things you'd like to do. However, they are both reasonable requests and doable, so this was put on the TODO list for one of the upcoming releases.
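In the meantime, one possible workaround (my own sketch, not part of POCO's API) is to pre-scan the buffer and only hand it to the parser once every brace and bracket opened outside a string literal has been closed again; until then, treat the data as incomplete rather than malformed:

#include <string>

// Heuristic completeness check: returns true only when the buffer *might*
// hold a complete JSON document (all braces/brackets balanced outside of
// string literals). This is a sketch, not a full validator.
bool looksComplete(const std::string& buf)
{
    int depth = 0;
    bool inString = false, escaped = false, sawOpen = false;
    for (char c : buf)
    {
        if (inString)
        {
            if (escaped)        escaped = false;
            else if (c == '\\') escaped = true;
            else if (c == '"')  inString = false;
        }
        else if (c == '"')      inString = true;
        else if (c == '{' || c == '[') { ++depth; sawOpen = true; }
        else if (c == '}' || c == ']') --depth;
    }
    return sawOpen && depth == 0 && !inString;
}

With something like that in place, the loop above would only call parser.parse(input_buffer) when looksComplete(input_buffer) returns true, so an exception thrown at that point can reasonably be treated as a genuine syntax error rather than a short read.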
When I try to use the function uv_close((uv_handle_t*)client, NULL) in the libuv library to actively close the TCP connection with the client, the error
"main: src/unix/core.c:117: uv_close: Assertion `!uv__is_closing(handle)' failed."
is reported. I have searched a lot online, but I still cannot find the correct way to solve this problem. I hope someone can tell me why it happens and how to solve it.
You are trying to close a handle that is either already closed or in a closing state (that is, somewhere along the process that brings the handle from being alive to being closed).
As you can see from the code of libuv, the uv_close function starts as:
void uv_close(uv_handle_t* handle, uv_close_cb close_cb) {
  assert(!uv__is_closing(handle));
  handle->flags |= UV_CLOSING;
  // ...
Where uv__is_closing is defined as:
#define uv__is_closing(h) \
(((h)->flags & (UV_CLOSING | UV_CLOSED)) != 0)
To sum up, as soon as uv_close is invoked on a handle, the UV_CLOSING flag is set, and it is checked on subsequent calls to avoid multiple runs of the close logic. In other words, you can close a handle only once.
The error most likely appears because you are invoking uv_close more than once on the same handle. However, it's hard to say without looking at the real code.
As a side note, you can use uv_is_closing to test your handle if you are in doubt. It's a kind of alias for uv__is_closing.
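As a hedged sketch (the handle variable and callback name are my own assumptions, not from your code), the close can be guarded so it runs at most once, and the handle is only freed once libuv has finished with it:

#include <stdlib.h>
#include <uv.h>

// Called by libuv after the handle has been fully closed;
// only now is it safe to release the memory backing it.
void on_client_closed(uv_handle_t* handle) {
    free(handle);
}

// Somewhere in the connection teardown path:
void close_client(uv_tcp_t* client) {
    if (!uv_is_closing((uv_handle_t*) client)) {
        uv_close((uv_handle_t*) client, on_client_closed);
    }
}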
I can't find anything in the Protocol Buffers documentation about exception handling in C++. The Javadoc clearly defines exceptions like InvalidProtocolBufferException, but there is nothing similar for C++.
Sometimes when I run my program it crashes on missing fields in what it thinks is a valid message; it simply stops and throws errors like this:
[libprotobuf FATAL google/protobuf/message_lite.cc:273] CHECK failed:
IsInitialized(): Can't serialize message of type "XXX" because it is
missing required fields: YY, ZZ
unknown file: Failure
C++ exception with description "CHECK failed: IsInitialized(): Can't
serialize message of type "XXX" because it is missing required fields:
YY, ZZ" thrown in the test body.
The source code of message_lite.cc is all wrapped in "GOOGLE_DCHECK" or "InitializationErrorMessage"...
My application cannot allow failures like this to halt the program (not sure what the term is in C++, but basically no unchecked exceptions), so I really need a way to catch these, log the error, and return gracefully in case some messages are severely corrupted. Is there any way to do that? Also, why do I see this post mentioning some sort of google::protobuf::FatalException, yet I can't find any documentation about it (and FatalException alone is probably not enough either)?
Thanks!
The failure you are seeing indicates that there is a bug in your program -- the program has requested to serialize a message without having filled in all the required fields first. Think of this like a segmentation fault. You shouldn't try to catch this exception -- you should instead fix your app so that the exception never happens in the first place.
Note that the check is a DCHECK, meaning it is only checked in debug builds. In your release builds (when NDEBUG is defined), this check will be skipped and the message will be written even though it is not valid. So, you don't have to worry about this crashing your application in production, only while debugging.
(Technically you could catch google::protobuf::FatalException, but the Protobuf code was not originally designed to be exception-safe. Originally, check failures would simply abort the program. It looks like FatalException was added recently, but since the code is not exception-safe, it's likely that you'll have memory leaks any time a FatalException is thrown. So, you probably should treat it like an abort().)
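If you nevertheless want to fail gracefully instead of tripping the check, a minimal sketch is to test IsInitialized() yourself before serializing. Here the message type XXX is taken from the error text above, the generated header name is hypothetical, and InitializationErrorString() assumes the full (non-lite) runtime:

#include <iostream>
#include <string>
#include "xxx.pb.h"  // hypothetical generated header for the message type "XXX"

// Serialize only if all required fields are set; otherwise log and bail out.
bool safeSerialize(const XXX& msg, std::string* out)
{
    if (!msg.IsInitialized())
    {
        std::cerr << "Refusing to serialize: missing required fields: "
                  << msg.InitializationErrorString() << std::endl;
        return false;
    }
    return msg.SerializeToString(out);
}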
I solved this; my problem was the same as yours.
If another thread changes the size of a proto item while you are serializing it, a FatalException is thrown.
So I first copy the item into another proto object and then serialize the copy.
#include <fstream>

ProtoInput item; // a global object shared with other threads

// ...

std::fstream output("myfile",
                    std::ios::out | std::ios::trunc | std::ios::binary);
ProtoInput in;
in.CopyFrom(item);               // take a private copy so another thread cannot resize it mid-serialization
size_t size = in.ByteSizeLong();
void *buffer = malloc(size);
if (in.SerializeToArray(buffer, size)) {
    output.write((char *) buffer, size);
}
output.close();
free(buffer);
I'm making a compiler in BNFC and it's got to a stage where it already compiles some stuff and the code works on my device. But before shipping it, I want my compiler to return proper error messages when the user tries to compile an invalid program.
I found out how bison writes errors to the stderr stream, and I'm able to catch those. Now suppose the user's code has no syntax error but just references an undefined variable. I'm able to catch this in my visitor, but I don't know what the line number was. How can I find the line number?
In bison you can access the starting and ending position of the current expression using the variable @$, which contains a struct with the members first_column, first_line, last_column and last_line. Similarly @1 etc. contain the same information for the sub-expressions $1 etc. respectively.
In order to have access to the same information later, you need to write it into your AST. So add a field to your AST node types to store the location, and then set that field when creating the node in your bison file.
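A minimal sketch of what that could look like, assuming %locations is enabled; the node type, field names, and constructor below are placeholders of my own rather than BNFC-generated code:

/* AST node carrying its source position */
struct Node {
    int line;
    int column;
    /* ... the usual node payload ... */
};

/* In the bison grammar: copy the location out of @$ when building the node */
expr : expr '+' expr
       {
         $$ = make_add_node($1, $3);   /* hypothetical constructor */
         $$->line   = @$.first_line;
         $$->column = @$.first_column;
       }
     ;

Later, when the visitor finds the undefined variable, it can report node->line directly.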
(The previous answer is richer,) but in some simple parsers, if we declare
%option yylineno
in flex and print it in yyerror,
void yyerror(char *s) {
    fprintf(stderr, "ERROR (line %d): before '%s'\n-%s", yylineno, yytext, s);
}
sometimes that helps...
I am using libcurl for a small program that gets data from an input URL. But I sometimes get an error from the pErrorBuffer like:
Failed writing to body (something != somethingelse)
What does this mean? I mean in what situation is this error created?
It means your write callback didn't return the same number of bytes as was passed into it!
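For reference, a sketch of a write callback that satisfies that contract (appending into a std::string is just an example choice):

#include <string>
#include <curl/curl.h>

// libcurl aborts the transfer with this error whenever the callback
// returns anything other than the number of bytes it was handed.
static size_t writeCallback(char* ptr, size_t size, size_t nmemb, void* userdata)
{
    std::string* out = static_cast<std::string*>(userdata);
    size_t total = size * nmemb;
    out->append(ptr, total);
    return total;   // must equal size * nmemb to signal success
}

// Usage (error handling omitted):
//   std::string body;
//   curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeCallback);
//   curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);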
Months ago I implemented a component which receives data over UDP, deserializes it via Boost::Serialization, and starts working with the incoming objects.
After some time of using this component, random crashes occurred, which I could trace to someone else sending data to my UDP port.
I solved this problem by simply adding a try/catch around the deserialization:
try
{
    boost::archive::text_iarchive inputArchive(incomingData);
    inputArchive >> givenElements; // the actual deserialization, here the exception has been thrown in the past
}
catch (boost::archive::archive_exception& ex)
{
    std::cout << "Archive Exception during deserializing:" << std::endl;
    std::cout << ex.what() << std::endl;
    std::cout << "Incoming data had the following content:" << std::endl;
    std::cout << dataStream.str() << std::endl;
}
The above code sorted out any foreign/corrupt data coming in via the network and only deserialized the data that was meant for the component.
Back then I worked with an older Boost version (I don't remember exactly: 1.44? 1.42?) on a Linux machine.
Currently I have to use the component again, on a Windows XP machine with a fairly new Boost 1.46.1.
Now the problem is that the try/catch no longer seems to filter out the foreign/corrupt data. As soon as such data comes in, my application crashes without any error message.
It is not possible for me to change the port I'm listening on. Besides that, I want to create a robust application which ignores data it cannot work with instead of crashing.
I'm now wondering if anyone has an idea why this happens. Has Boost become less robust? Is it something to do with the OS? I have no idea and hope this is the kind of question someone who is "more into Boost" could answer.
My answer is not directly related to Boost serialization, but it is always a good idea to do some validation on incoming network data before entering deeper logic.
Before diving deep into Boost serialization, I suggest you:
Check the size of the UDP packet
If you are using some kind of header, do some validation on it
Do whatever else seems appropriate for your case
and then try to deserialize the packet. This way you can filter out foreign packets yourself instead of relying on Boost.
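For example, here is a minimal sketch of such a check, assuming you can get the sender to prepend a small magic value to each packet (the constant and layout are purely illustrative, not part of Boost):

#include <cstdint>
#include <cstring>

const uint32_t kExpectedMagic = 0x4D59AA55;  // assumed marker agreed with the sender

// Reject anything that is too short or does not start with our marker
// before it ever reaches the Boost archive.
bool looksLikeOurPacket(const char* data, std::size_t length)
{
    if (length < sizeof(uint32_t))
        return false;
    uint32_t magic = 0;
    std::memcpy(&magic, data, sizeof(magic));
    return magic == kExpectedMagic;
}

Only when this check passes would the remaining bytes be handed to the text_iarchive; everything else is simply dropped.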