I am trying to get a Scala program to communicate with a C++ program via ZeroMQ using the request-reply pattern. The Scala program should send a request to the C++ program, which then replies.
However, I see the error
org.zeromq.ZMQException: Operation cannot be accomplished in current state
All I can find in the documentation is that one has to read the response before sending a second request. In my case I am issuing a request followed by a read of the response, and it is the read that throws the exception.
Code of the server:
#include "zmq.hpp"
#include <string>
#include <iostream>
#include <thread>
int main()
{
    zmq::context_t context(1);
    zmq::socket_t socket(context, ZMQ_REP);
    socket.bind("tcp://*:5555");

    while (1) {
        zmq::message_t request;
        socket.recv(&request);
        std::string requ = std::string(static_cast<char*>(request.data()), request.size());
        std::cout << requ << std::endl;

        // Write response
        zmq::message_t req(2);
        memcpy((void *)req.data(), "ok", 5);
        socket.send(req);
    }
}
Code of the client:
import org.zeromq.ZMQ
import org.zeromq.ZMQ.{Context, Socket}
object Adapter {
  def main(args: Array[String]) = {
    val context = ZMQ.context(1)
    val socket = context.socket(ZMQ.REQ)

    println { "Connecting to backend" }
    socket.connect("tcp://127.0.0.1:5555")

    val request = "1 1 1 1".getBytes()
    request(request.length - 1) = 0.toByte

    println { "Sending Request" }
    if (!socket.send(request, 0))
      println { "could not send" }

    println { "Receiving Response" }
    val reply = socket.recv(0)
    println { "Received reply: " + new String(reply, 0, reply.length - 1) }
  }
}
The complete output of sbt:
OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jna7980154308052950568.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Connecting to backend
Sending Request
Receiving Response
[error] (run-main-0) org.zeromq.ZMQException: Operation cannot be accomplished in current state
org.zeromq.ZMQException: Operation cannot be accomplished in current state
at org.zeromq.ZMQ$Socket.raiseZMQException(ZMQ.java:448)
at org.zeromq.ZMQ$Socket.recv(ZMQ.java:368)
at ZeroMQActor$.main(ZeroMQExample.scala:56)
at ZeroMQActor.main(ZeroMQExample.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 5 s, completed Jun 16, 2015 4:42:42 PM
sbt pulls in Scala 2.9.1 and akka-zeromq 2.0. I have installed ZeroMQ 3.5 from source, but I see the same behavior when I install the Ubuntu package libzmq3-dev. One possible workaround is using JeroMQ, a pure-Java implementation of zmq, but I would prefer to depend on a single zmq library across my whole stack rather than deal with interop issues.
Thanks in advance.
I believe
memcpy((void *)req.data(), "ok", 5);
should be
memcpy((void *)req.data(), "ok", 2);
The message was constructed with a size of only 2 bytes, so copying 5 bytes writes past the end of its buffer, which could be enough to break message handling.
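One way to avoid this class of bug is to derive both the message size and the copy length from the same string. A small sketch using the same old-style cppzmq API as the question (the helper name is mine):

#include <cstring>
#include <string>
#include "zmq.hpp"

// Helper: build the reply from a std::string so the message size and the
// number of bytes copied into it can never drift apart.
void sendReply(zmq::socket_t& socket, const std::string& text)
{
    zmq::message_t reply(text.size());
    memcpy(reply.data(), text.data(), text.size());
    socket.send(reply);
}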
Related
I am trying to learn how to use gRPC asynchronously in C++. Going over the client example at https://github.com/grpc/grpc/blob/v1.33.1/examples/cpp/helloworld/greeter_async_client.cc
Unless I am misunderstanding, I don't see anything asynchronous being demonstrated. There is one and only one RPC call, and it blocks on the main thread until the server processes it and the result is sent back.
What I need to do is create a client that can make one RPC call, and then start another while waiting for the result of the first to come back from the server.
I've got no idea how to go about that.
Does anyone have a working example, or can anyone describe how to actually use gRPC asynchronously?
Their example code:
/*
*
* Copyright 2015 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include <iostream>
#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>
#include <grpc/support/log.h>
#ifdef BAZEL_BUILD
#include "examples/protos/helloworld.grpc.pb.h"
#else
#include "helloworld.grpc.pb.h"
#endif
using grpc::Channel;
using grpc::ClientAsyncResponseReader;
using grpc::ClientContext;
using grpc::CompletionQueue;
using grpc::Status;
using helloworld::HelloRequest;
using helloworld::HelloReply;
using helloworld::Greeter;
class GreeterClient {
public:
explicit GreeterClient(std::shared_ptr<Channel> channel)
: stub_(Greeter::NewStub(channel)) {}
// Assembles the client's payload, sends it and presents the response back
// from the server.
std::string SayHello(const std::string& user) {
// Data we are sending to the server.
HelloRequest request;
request.set_name(user);
// Container for the data we expect from the server.
HelloReply reply;
// Context for the client. It could be used to convey extra information to
// the server and/or tweak certain RPC behaviors.
ClientContext context;
// The producer-consumer queue we use to communicate asynchronously with the
// gRPC runtime.
CompletionQueue cq;
// Storage for the status of the RPC upon completion.
Status status;
// stub_->PrepareAsyncSayHello() creates an RPC object, returning
// an instance to store in "call" but does not actually start the RPC
// Because we are using the asynchronous API, we need to hold on to
// the "call" instance in order to get updates on the ongoing RPC.
std::unique_ptr<ClientAsyncResponseReader<HelloReply> > rpc(
stub_->PrepareAsyncSayHello(&context, request, &cq));
// StartCall initiates the RPC call
rpc->StartCall();
// Request that, upon completion of the RPC, "reply" be updated with the
// server's response; "status" with the indication of whether the operation
// was successful. Tag the request with the integer 1.
rpc->Finish(&reply, &status, (void*)1);
void* got_tag;
bool ok = false;
// Block until the next result is available in the completion queue "cq".
// The return value of Next should always be checked. This return value
// tells us whether there is any kind of event or the cq_ is shutting down.
GPR_ASSERT(cq.Next(&got_tag, &ok));
// Verify that the result from "cq" corresponds, by its tag, our previous
// request.
GPR_ASSERT(got_tag == (void*)1);
// ... and that the request was completed successfully. Note that "ok"
// corresponds solely to the request for updates introduced by Finish().
GPR_ASSERT(ok);
// Act upon the status of the actual RPC.
if (status.ok()) {
return reply.message();
} else {
return "RPC failed";
}
}
private:
// Out of the passed in Channel comes the stub, stored here, our view of the
// server's exposed services.
std::unique_ptr<Greeter::Stub> stub_;
};
int main(int argc, char** argv) {
// Instantiate the client. It requires a channel, out of which the actual RPCs
// are created. This channel models a connection to an endpoint (in this case,
// localhost at port 50051). We indicate that the channel isn't authenticated
// (use of InsecureChannelCredentials()).
GreeterClient greeter(grpc::CreateChannel(
"localhost:50051", grpc::InsecureChannelCredentials()));
std::string user("world");
std::string reply = greeter.SayHello(user); // The actual RPC call!
std::cout << "Greeter received: " << reply << std::endl;
return 0;
}
You are right, this is a really bad example; it blocks and is not asynchronous at all.
Better to look at this example instead: grpc/greeter_async_client2.
Here you can see in main() that they send the RPC messages in a loop in an asynchronous, non-blocking way:
Client Async send function:
void SayHello(const std::string& user) {
    // Data we are sending to the server.
    HelloRequest request;
    request.set_name(user);

    // Call object to store rpc data
    AsyncClientCall* call = new AsyncClientCall;

    call->response_reader =
        stub_->PrepareAsyncSayHello(&call->context, request, &cq_);

    // StartCall initiates the RPC call
    call->response_reader->StartCall();

    call->response_reader->Finish(&call->reply, &call->status, (void*)call);
}
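For reference, the AsyncClientCall used above bundles the per-call state. In the linked example it is a small struct roughly like this (paraphrased, so member names may differ slightly; it needs <memory> and the same using declarations as the first example):

// Everything the completion handler needs to finish one RPC; owned by "call"
// from creation in SayHello() until deletion in AsyncCompleteRpc().
struct AsyncClientCall {
    HelloReply reply;        // filled in by Finish()
    ClientContext context;   // per-call client context
    Status status;           // outcome of the RPC
    std::unique_ptr<ClientAsyncResponseReader<HelloReply>> response_reader;
};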
Client Async receive function:
// Loop while listening for completed responses.
// Prints out the response from the server.
void AsyncCompleteRpc() {
    void* got_tag;
    bool ok = false;

    // Block until the next result is available in the completion queue "cq".
    while (cq_.Next(&got_tag, &ok)) {
        // The tag in this example is the memory location of the call object
        AsyncClientCall* call = static_cast<AsyncClientCall*>(got_tag);

        if (call->status.ok())
            std::cout << "Greeter received: " << call->reply.message() << std::endl;
        else
            std::cout << "RPC failed" << std::endl;

        // Once we're complete, deallocate the call object.
        delete call;
    }
}
Main function:
int main(int argc, char** argv) {
    GreeterClient greeter(grpc::CreateChannel(
        "localhost:50051", grpc::InsecureChannelCredentials()));

    // Spawn reader thread that loops indefinitely
    std::thread thread_ = std::thread(&GreeterClient::AsyncCompleteRpc, &greeter);

    for (int i = 0; i < 100; i++) {
        std::string user("world " + std::to_string(i));
        greeter.SayHello(user);  // The actual RPC call!
    }

    std::cout << "Press control-c to quit" << std::endl << std::endl;
    thread_.join();  // blocks forever
    return 0;
}
Addition:
As #nmgeek noted, there is a potential memory leak in this solution; please see memory-leak-in-grpc-async-client.
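One way to make that cleanup harder to get wrong (a sketch of my own, not taken from the linked answer; it needs <memory>) is to take ownership of the tag as soon as it comes off the completion queue:

// AsyncCompleteRpc() with the raw tag wrapped in a unique_ptr, so the
// AsyncClientCall is freed at the end of every iteration, on every path.
void AsyncCompleteRpc() {
    void* got_tag;
    bool ok = false;
    while (cq_.Next(&got_tag, &ok)) {
        std::unique_ptr<AsyncClientCall> call(static_cast<AsyncClientCall*>(got_tag));
        if (call->status.ok())
            std::cout << "Greeter received: " << call->reply.message() << std::endl;
        else
            std::cout << "RPC failed" << std::endl;
        // No explicit delete needed: the unique_ptr releases the call object
        // automatically at the end of each iteration.
    }
}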
I'm new to zmq and cppzmq. While trying to run the multithreaded example in the official guide: http://zguide.zeromq.org/cpp:mtserver
My setup
macOS Mojave, Xcode 10.3
libzmq 4.3.2 via Homebrew
cppzmq GitHub HEAD
I hit a few problems.
Problem 1
When running the source code from the guide, it hangs forever without any stdout output showing up.
Here is the code, directly copied from the guide.
/*
Multithreaded Hello World server in C
*/
#include <pthread.h>
#include <unistd.h>
#include <cassert>
#include <string>
#include <iostream>
#include <zmq.hpp>
void *worker_routine (void *arg)
{
zmq::context_t *context = (zmq::context_t *) arg;
zmq::socket_t socket (*context, ZMQ_REP);
socket.connect ("inproc://workers");
while (true) {
// Wait for next request from client
zmq::message_t request;
socket.recv (&request);
std::cout << "Received request: [" << (char*) request.data() << "]" << std::endl;
// Do some 'work'
sleep (1);
// Send reply back to client
zmq::message_t reply (6);
memcpy ((void *) reply.data (), "World", 6);
socket.send (reply);
}
return (NULL);
}
int main ()
{
// Prepare our context and sockets
zmq::context_t context (1);
zmq::socket_t clients (context, ZMQ_ROUTER);
clients.bind ("tcp://*:5555");
zmq::socket_t workers (context, ZMQ_DEALER);
workers.bind ("inproc://workers");
// Launch pool of worker threads
for (int thread_nbr = 0; thread_nbr != 5; thread_nbr++) {
pthread_t worker;
pthread_create (&worker, NULL, worker_routine, (void *) &context);
}
// Connect work threads to client threads via a queue
zmq::proxy (static_cast<void*>(clients),
static_cast<void*>(workers),
nullptr);
return 0;
}
It crashes soon after I put a breakpoint in the while loop of the worker.
Problem 2
Noticing that the compiler prompted me to replace deprecated API calls, I modified the above sample code to make the warnings disappear.
/*
Multithreaded Hello World server in C
*/
#include <pthread.h>
#include <unistd.h>
#include <cassert>
#include <string>
#include <iostream>
#include <cstdio>
#include <zmq.hpp>
void *worker_routine (void *arg)
{
zmq::context_t *context = (zmq::context_t *) arg;
zmq::socket_t socket (*context, ZMQ_REP);
socket.connect ("inproc://workers");
while (true) {
// Wait for next request from client
std::array<char, 1024> buf{'\0'};
zmq::mutable_buffer request(buf.data(), buf.size());
socket.recv(request, zmq::recv_flags::dontwait);
std::cout << "Received request: [" << (char*) request.data() << "]" << std::endl;
// Do some 'work'
sleep (1);
// Send reply back to client
zmq::message_t reply (6);
memcpy ((void *) reply.data (), "World", 6);
try {
socket.send (reply, zmq::send_flags::dontwait);
}
catch (zmq::error_t& e) {
printf("ERROR: %X\n", e.num());
}
}
return (NULL);
}
int main ()
{
// Prepare our context and sockets
zmq::context_t context (1);
zmq::socket_t clients (context, ZMQ_ROUTER);
clients.bind ("tcp://*:5555"); // who i talk to.
zmq::socket_t workers (context, ZMQ_DEALER);
workers.bind ("inproc://workers");
// Launch pool of worker threads
for (int thread_nbr = 0; thread_nbr != 5; thread_nbr++) {
pthread_t worker;
pthread_create (&worker, NULL, worker_routine, (void *) &context);
}
// Connect work threads to client threads via a queue
zmq::proxy (clients, workers);
return 0;
}
I'm not pretending this is a literal translation of the original (broken) example; it's my attempt to make things compile and run without obvious memory errors.
This code keeps giving me error number 9523DFB (156384763 in decimal) from the try-catch block. I can't find the definition of this error number in the official docs, but I learned from this question that it is the native ZeroMQ error EFSM:
The zmq_send() operation cannot be performed on this socket at the moment due to the socket not being in the appropriate state. This error may occur with socket types that switch between several states, such as ZMQ_REP.
I'd appreciate it if anyone can point out where I went wrong.
UPDATE
I tried polling according to #user3666197's suggestion, but the program still hangs. Inserting any breakpoint effectively crashes the program, making it difficult to debug.
Here is the new worker code
void *worker_routine (void *arg)
{
zmq::context_t *context = (zmq::context_t *) arg;
zmq::socket_t socket (*context, ZMQ_REP);
socket.connect ("inproc://workers");
zmq::pollitem_t items[1] = { { socket, 0, ZMQ_POLLIN, 0 } };
while (true) {
if(zmq::poll(items, 1, -1) < 1) {
printf("Terminating worker\n");
break;
}
// Wait for next request from client
std::array<char, 1024> buf{'\0'};
socket.recv(zmq::buffer(buf), zmq::recv_flags::none);
std::cout << "Received request: [" << (char*) buf.data() << "]" << std::endl;
// Do some 'work'
sleep (1);
// Send reply back to client
zmq::message_t reply (6);
memcpy ((void *) reply.data (), "World", 6);
try {
socket.send (reply, zmq::send_flags::dontwait);
}
catch (zmq::error_t& e) {
printf("ERROR: %s\n", e.what());
}
}
return (NULL);
}
Welcome to the domain of the Zen-of-Zero
Suspect #1: the code jumps straight into an unresolvable live-lock, due to a move into an ill-directed state of the distributed Finite-State-Automaton:
While I have always advocated for preferring non-blocking .recv()-s, the code above defeats itself right at this step:
socket.recv( request, zmq::recv_flags::dontwait ); // socket being == ZMQ_REP
kills all chances for anything but the very error The zmq_send() operation cannot be performed on this socket at the moment due to the socket not being in the appropriate state,
as
going into the .send()-able state is possible if and only if a previous .recv() has delivered a real message.
The Best Next Step:
Review the code and either use a blocking form of .recv() before going to .send(), or better, use a { blocking | non-blocking } form of .poll( { 0 | timeout }, ZMQ_POLLIN ) before attempting to .recv(), and keep doing other things if there is nothing to receive yet (so as to avoid throwing the dFSA into an unresolvable collision and flooding your stdout/stderr with a second-spaced flow of printf(" ERROR: %X\n", e.num() );).
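A minimal sketch of that poll-first approach, assuming a recent cppzmq (socket.handle() and the optional-returning recv()); the 100 ms timeout and the function name are illustrative:

#include <chrono>
#include <zmq.hpp>

void worker_loop(zmq::socket_t& socket)   // socket is a connected ZMQ_REP socket
{
    zmq::pollitem_t items[] = { { socket.handle(), 0, ZMQ_POLLIN, 0 } };
    while (true) {
        zmq::poll(items, 1, std::chrono::milliseconds(100));   // wait up to 100 ms
        if (!(items[0].revents & ZMQ_POLLIN))
            continue;                     // nothing to receive yet - do other work instead
        zmq::message_t request;
        auto received = socket.recv(request, zmq::recv_flags::none);  // will not block now
        if (!received)
            continue;
        zmq::message_t reply("World", 5);
        auto sent = socket.send(reply, zmq::send_flags::none);  // REP may send only after a recv
        (void)sent;
    }
}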
Error Handling:
Better to use const char *zmq_strerror( int errnum ); fed by int zmq_errno( void );
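For instance, the question's try/catch could be rewritten like this (a sketch; in cppzmq the caught zmq::error_t already carries the native error number via e.num(), and std::fprintf needs <cstdio>):

try {
    socket.send(reply, zmq::send_flags::dontwait);
}
catch (const zmq::error_t& e) {
    // zmq_strerror() turns the native error number into readable text,
    // e.g. "Operation cannot be accomplished in current state" for EFSM.
    std::fprintf(stderr, "send failed: %s (errno %d)\n", zmq_strerror(e.num()), e.num());
}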
The Problem 1:
In contrast to the suicidal ::dontwait flag that is the root cause of Problem 2, the root cause of Problem 1 is that the blocking form of the first .recv() moves all the worker threads into an indeterministically long, possibly infinite, waiting state, as .recv() blocks any further step until a real message arrives (which, from the MCVE, it seems it never will), so your pool of threads remains in a pool-wide blocked-waiting state and nothing will ever happen until a message arrives.
Update on how REQ/REP works:
The REQ/REP Scalable Communication Pattern Archetype works like a distributed pair of people. One, let's call her Mary, asks (Mary .send()-s the REQ), while the other, say Bob the REP, listens in a potentially infinitely long blocking .recv() (or takes due care, using .poll(), to regularly check whether Mary has asked for something, and continues his own hobbies or gardening otherwise). Once Bob's end gets a message, Bob can go and .send() Mary a reply (not before, as he knows nothing about when and what Mary would, or would not, ask in the nearer or farther future), and Mary is fair enough not to ask her next REQ.send()-question of Bob any sooner than after Bob has replied (REP.send()) and Mary has received his message (REQ.recv()) - which is fair and more symmetric than real life may exhibit among real people under one roof :o)
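Put into code, that strict alternation looks roughly like this (a cppzmq sketch of mine; endpoint and payloads are illustrative):

#include <zmq.hpp>

void mary(zmq::context_t& ctx)                            // the REQ side
{
    zmq::socket_t req(ctx, ZMQ_REQ);
    req.connect("tcp://127.0.0.1:5555");
    zmq::message_t question("hi?", 3);
    req.send(question, zmq::send_flags::none);            // (1) ask
    zmq::message_t answer;
    auto got = req.recv(answer, zmq::recv_flags::none);   // (2) must receive before asking again
    (void)got;
}

void bob(zmq::context_t& ctx)                             // the REP side
{
    zmq::socket_t rep(ctx, ZMQ_REP);
    rep.bind("tcp://127.0.0.1:5555");
    zmq::message_t question;
    auto got = rep.recv(question, zmq::recv_flags::none); // (1) must receive first
    (void)got;
    zmq::message_t answer("ok", 2);
    rep.send(answer, zmq::send_flags::none);              // (2) only then may he reply
}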
The code?
The code is not a reproducible MCVE. The main() creates five Bobs (hanging, waiting for a call from a Mary, somewhere over the inproc:// transport-class), but no Mary ever calls, or does she? There is no visible sign of any Mary trying to do so, much less of her (their - it could even be a dynamic community in an N:M herd-of-Marys to herd-of-five-Bobs relation) attempts to handle the REP-lies coming from any of the five Bobs.
Persevere. ZeroMQ took me some head-scratching too, yet the years since I took due care to learn the Zen-of-Zero have been a rewarding eternal walk in the Gardens of Paradise. No localhost serial-code IDE will ever be able to "debug" a distributed system unless a distributed-inspector infrastructure is in place; a due architecture for a distributed-system monitor/tracer/debugger is another layer of distributed messaging/signaling on top of the debugged distributed messaging/signaling system - so do not expect it from a trivial localhost serial-code IDE.
If still in doubt, isolate the potential troublemakers: replace inproc:// with tcp://, and if the toys do not work over tcp:// (where one can wire-line trace the messages), they won't work with the inproc:// memory-zone tricks either.
Regarding the hang that I saw in my UPDATED question, I finally figured out what's going on. It was a false expectation on my part.
The sample code in my question was never meant to be self-contained service/client code: it is a server-only app with a ZMQ_REP socket. It just waits for client code to send requests through ZMQ_REQ sockets, so the "hang" I was seeing is completely normal!
As soon as I hook up a client app to it, things start rolling instantly. This chapter is somewhere in the middle of the guide, and because I was only concerned with multithreading I skipped many code samples and messaging patterns, which led to my confusion.
The code comments even say it is a server, but I expected explicit confirmation from the program. To be fair, the lack of a visual cue and the compiler deprecation warnings caused me, as a new user, to question the sample code, but the story the code tells is valid.
Such a shame about the wasted time! But all of a sudden, everything #user3666197 says in his answer starts to make sense.
For the completeness of this question, here is the updated server worker-thread code that works:
// server.cpp
void *worker_routine (void *arg)
{
    zmq::context_t *context = (zmq::context_t *) arg;
    zmq::socket_t socket (*context, ZMQ_REP);
    socket.connect ("inproc://workers");

    while (true) {
        // Wait for next request from client
        std::array<char, 1024> buf{'\0'};
        socket.recv(zmq::buffer(buf), zmq::recv_flags::none);
        std::cout << "Received request: [" << (char*) buf.data() << "]" << std::endl;

        // Do some 'work'
        sleep (1);

        // Send reply back to client
        zmq::message_t reply (6);
        memcpy ((void *) reply.data (), "World", 6);
        try {
            socket.send (reply, zmq::send_flags::dontwait);
        }
        catch (zmq::error_t& e) {
            printf("ERROR: %s\n", e.what());
        }
    }
    return (NULL);
}
The much needed client code:
// client.cpp
int main (void)
{
    void *context = zmq_ctx_new ();

    // Socket to talk to server
    void *requester = zmq_socket (context, ZMQ_REQ);
    zmq_connect (requester, "tcp://localhost:5555");

    int request_nbr;
    for (request_nbr = 0; request_nbr != 10; request_nbr++) {
        zmq_send (requester, "Hello", 6, 0);
        char buf[6];
        zmq_recv (requester, buf, 6, 0);
        printf ("Received reply %d [%s]\n", request_nbr, buf);
    }

    zmq_close (requester);
    zmq_ctx_destroy (context);
    return 0;
}
The server worker does not have to poll manually, because it sits behind the zmq::proxy.
This code reproduces the issue:
#include <iostream>
#include <xms.hpp>
int main()
{
try
{
xms::ConnectionFactory factory;
factory.setIntProperty(XMSC_CONNECTION_TYPE, XMSC_CT_WMQ);
factory.setIntProperty(XMSC_WMQ_CONNECTION_MODE, XMSC_WMQ_CM_CLIENT);
factory.setStringProperty(XMSC_WMQ_QUEUE_MANAGER, "my.queue.manager");
factory.setStringProperty(XMSC_WMQ_HOST_NAME, "my.dev.mq");
factory.setStringProperty(XMSC_WMQ_CHANNEL, "my.channel");
factory.setIntProperty(XMSC_WMQ_PORT, 1414);
std::cout << "Is Factory Null? => " << factory.isNull() << std::endl;
xms::Connection conn = factory.createConnection(); //THIS THROWS EXCEPTION
std::cout << "Is Factory Null? => " << factory.isNull() << std::endl;
}
catch(xms::Exception const & e)
{
e.dump(std::cout);
}
}
It throws an exception, dumping the following message:
Is Factory Null? => 0
Exception:
ErrorData =
ErrorCode = 26 (XMS_E_CONNECT_FAILED)
JMS Type = 1 (XMS_X_GENERAL_EXCEPTION)
Linked Exception:
ErrorData = libmqic_r.so
ErrorCode = 0 (XMS_E_NONE)
JMS Type = 1 (XMS_X_GENERAL_EXCEPTION)
Any idea what is wrong with the code?
Note that devpmmq is just an alias for the actual IP address (host). If I put any random/nonsense value there, I get the same error, which is bad because the API should have given a better error message such as "host not found" or something along those lines. Is there any way to enable more verbose diagnostics?
XMS C/C++ internally loads WebSphere MQ client libraries. The problem here is that XMS is unable to find WebSphere MQ client library libmqm_r.so.
What is puzzling me is that the code has set XMSC_WMQ_CONNECTION_MODE as XMSC_WMQ_CM_CLIENT but XMS is attempting to load libmqm_r.so. It should have attempted to load libmqic_r.so.
Have you installed the WebSphere MQ client? Also, what is the version of XMS? Clients can be downloaded from here.
I've installed LightTPD on Windows.
It starts normally without FastCGI.
Then I copied one of the FastCGI examples
#include <sstream> // manipulate strings (integer conversion)
#include <string> // work with strings in a more intuitive way
#include "libfcgi2.h" // Header file for libfcgi2.dll (linked with libfcgi2.lib)
using namespace std;
int main( )
{
printf("Start...\r\n");
FCGX_REQUEST Req = { 0 }; // Create & Initialize all members to zero
int count(0);
string sReply;
ostringstream ss;
FCGX_InitRequest( &Req, 0, 0 ); // FCGX_DEBUG - third parameter
// Open Database
while(true)
{
if( FCGX_Accept_r(&Req) < 0 ) break; // Execution is blocked here until a Request is received
count++;
ss << count; // Stringstream is a typesafe Integer conversion
sReply = "Content-Type: text/html\r\n\r\n Hello World " + ss.str();
FCGX_PutStr( sReply.data(), sReply.length(), Req.pOut );
ss.str(""); // clear the string stream object
}
// Close Database
printf("End...\r\n");
return 0;
}
and tried to start the server with the following config:
server.modules = (
...
"mod_fastcgi",
....
fastcgi.server = ( ".exe" =>
( "" =>
( "bin-path" => "C:\FastCGI\Examples\C++\Ex_Counter.exe",
"port" => 8080,
"min-procs" => 1,
"max-procs" => 1
)
)
)
And I get this error on startup:
C:\LightTPD>LightTPD.exe -f conf\lighttpd-inc.conf -m lib -D
cygwin warning:
MS-DOS style path detected: conf\lighttpd-inc.conf
Preferred POSIX equivalent is: conf/lighttpd-inc.conf
CYGWIN environment variable option "nodosfilewarning" turns off this warning.
Consult the user's guide for more details about POSIX paths:
http://cygwin.com/cygwin-ug-net/using.html#using-pathnames
2011-10-07 12:50:25: (log.c.166) server started
2011-10-07 12:50:25: (mod_fastcgi.c.1367) --- fastcgi spawning local
proc: C:\FastCGI\Examples\C++\Ex_Counter.exe
port: 8080
socket
max-procs: 1
2011-10-07 12:50:25: (mod_fastcgi.c.1391) --- fastcgi spawning
port: 8080
socket
current: 0 / 1
2011-10-07 12:50:25: (mod_fastcgi.c.1104) the fastcgi-backend C:\FastCGI\Examples\C++\Ex_Counter.exe failed to start:
2011-10-07 12:50:25: (mod_fastcgi.c.1108) child exited with status 0 C:\FastCGI\Examples\C++\Ex_Counter.exe
2011-10-07 12:50:25: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version.
If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
2011-10-07 12:50:25: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
2011-10-07 12:50:25: (server.c.942) Configuration of plugins failed. Going down.
I'm not using PHP, so I don't know where I should set this flag.
It appears that the example fails to call FCGX_Init() before using the other FCGX library functions. This causes FCGX_Accept_r() to return a non-zero error condition, and your example then exits with the status 0 indicated in the log file you are seeing.
From fcgiapp.h:
/*
*----------------------------------------------------------------------
*
* FCGX_Accept_r --
*
* Accept a new request (multi-thread safe). Be sure to call
* FCGX_Init() first.
*
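A minimal sketch of the fix, written against the standard fcgiapp.h API (the Windows libfcgi2 wrapper used in the question may spell the type and stream names slightly differently); the key change is the FCGX_Init() call before anything else:

#include <sstream>
#include <string>
#include "fcgiapp.h"

int main()
{
    if (FCGX_Init() != 0)               // must be called before any other FCGX_* function
        return 1;

    FCGX_Request req;
    FCGX_InitRequest(&req, 0, 0);

    int count = 0;
    while (FCGX_Accept_r(&req) >= 0) {  // blocks until the web server hands us a request
        std::ostringstream ss;
        ss << "Content-Type: text/html\r\n\r\nHello World " << ++count;
        const std::string reply = ss.str();
        FCGX_PutStr(reply.data(), static_cast<int>(reply.size()), req.out);
        FCGX_Finish_r(&req);            // release the per-request resources
    }
    return 0;
}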
I'm running a server and a client. I'm testing my program on my computer.
This is the function in the server that sends data to the client:
int sendToClient(int fd, string msg) {
    cout << "sending to client " << fd << " " << msg << endl;
    int len = msg.size() + 1;
    cout << "10\n";

    /* send msg size */
    if (send(fd, &len, sizeof(int), 0) == -1) {
        cout << "error sendToClient\n";
        return -1;
    }
    cout << "11\n";

    /* send msg */
    int nbytes = send(fd, msg.c_str(), len, 0); // CRASHES HERE
    cout << "15\n";
    return nbytes;
}
When the client exits it sends "BYE" to the server, and the server replies to it with the above function. I connect the client to the server (it's done on one computer, in 2 terminals), and when the client exits the server crashes - it never prints the 15.
Any idea why? Any idea how to test why?
Thank you.
EDIT: this is how I close the client:
void closeClient(int notifyServer = 0) {
    /** notify server before closing */
    if (notifyServer) {
        int len = SERVER_PROTOCOL[bye].size() + 1;
        char* buf = new char[len];
        strcpy(buf, SERVER_PROTOCOL[bye].c_str()); // c_str - NEED TO FREE????
        sendToServer(buf, len);
        delete[] buf;
    }
    close(_sockfd);
}
By the way, if I skip this code, meaning I just leave the close(_sockfd) without notifying the server, everything is OK - the server doesn't crash.
EDIT 2: this is the end of strace.out:
5211 recv(5, "BYE\0", 4, 0) = 4
5211 write(1, "received from client 5 \n", 24) = 24
5211 write(1, "command: BYE msg: \n", 19) = 19
5211 write(1, "BYEBYE\n", 7) = 7
5211 write(1, "response = ALALA!!!\n", 20) = 20
5211 write(1, "sending to client 5 ALALA!!!\n", 29) = 29
5211 write(1, "10\n", 3) = 3
5211 send(5, "\t\0\0\0", 4, 0) = 4
5211 write(1, "11\n", 3) = 3
5211 send(5, "ALALA!!!\0", 9, 0) = -1 EPIPE (Broken pipe)
5211 --- SIGPIPE (Broken pipe) # 0 (0) ---
5211 +++ killed by SIGPIPE +++
A broken pipe can kill my program?? Why doesn't send() just return -1??
You may want to specify MSG_NOSIGNAL in the flags:
int nbytes = send(fd,msg.c_str(), msg.size(), MSG_NOSIGNAL);
You're getting SIGPIPE because of a "feature" in Unix that raises SIGPIPE when trying to send on a socket that the remote peer has closed. Since you don't handle the signal, the default signal-handler is called, and it aborts/crashes your program.
To get the behavior you want (i.e. make send() return with an error instead of raising a signal), add this to your program's startup routine (e.g. the top of main()):
#include <signal.h>
int main(int argc, char ** argv)
{
[...]
signal(SIGPIPE, SIG_IGN);
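Putting both ideas together, the question's sendToClient() could look roughly like this (a sketch; MSG_NOSIGNAL is Linux-specific, so on other platforms you would rely on the signal() approach above or the SO_NOSIGPIPE socket option):

#include <sys/socket.h>
#include <cerrno>
#include <cstring>
#include <iostream>
#include <string>

int sendToClient(int fd, const std::string& msg) {
    int len = static_cast<int>(msg.size()) + 1;            // still includes the trailing '\0'
    if (send(fd, &len, sizeof(int), MSG_NOSIGNAL) == -1) {
        std::cout << "error sendToClient: " << std::strerror(errno) << "\n";
        return -1;
    }
    // With MSG_NOSIGNAL, a dead peer produces an EPIPE return value
    // instead of a process-killing SIGPIPE.
    int nbytes = static_cast<int>(send(fd, msg.c_str(), len, MSG_NOSIGNAL));
    if (nbytes == -1 && errno == EPIPE)
        std::cout << "client closed the connection\n";
    return nbytes;
}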
Probably the client exits before the server has completed the sending, thus breaking the socket between them and making send() crash.
link
This socket was connected but the connection is now broken. In this case, send generates a SIGPIPE signal first; if that signal is ignored or blocked, or if its handler returns, then send fails with EPIPE.
If the client exits before the second send from the server, and the connection is not disposed of properly, your server keeps hanging and this could provoke the crash.
Just a guess, since we don't know what server and client actually do.
I find the following line of code strange because you define int len = msg.size()+1;.
int nbytes = send(fd,msg.c_str(),len,0); //CRASHES HERE
What happens if you define int len = msg.size();?
If you are on Linux, try to run the server inside strace. This will write lots of useful data to a log file.
strace -f -o strace.out ./server
Then have a look at the end of the log file. Maybe it's obvious what the program did and when it crashed, maybe not. In the latter case: Post the last lines here.