How to configure lighttpd with FastCGI - C++

I've installed LightTPD on Windows.
It starts normally without FastCGI.
Then I copy/pasted one of the FastCGI examples:
#include <cstdio>     // printf
#include <sstream>    // manipulate strings (integer conversion)
#include <string>     // work with strings in a more intuitive way
#include "libfcgi2.h" // Header file for libfcgi2.dll (linked with libfcgi2.lib)

using namespace std;

int main()
{
    printf("Start...\r\n");
    FCGX_REQUEST Req = { 0 }; // Create & initialize all members to zero
    int count(0);
    string sReply;
    ostringstream ss;

    FCGX_InitRequest( &Req, 0, 0 ); // FCGX_DEBUG - third parameter
    // Open Database
    while(true)
    {
        if( FCGX_Accept_r(&Req) < 0 ) break; // Execution is blocked here until a request is received
        count++;
        ss << count; // stringstream is a typesafe integer conversion
        sReply = "Content-Type: text/html\r\n\r\n Hello World " + ss.str();
        FCGX_PutStr( sReply.data(), sReply.length(), Req.pOut );
        ss.str(""); // clear the string stream object
    }
    // Close Database
    printf("End...\r\n");
    return 0;
}
and tried to start the server with the following config:
server.modules = (
    ...
    "mod_fastcgi",
    ...
)

fastcgi.server = ( ".exe" =>
    ( "" =>
        ( "bin-path" => "C:\FastCGI\Examples\C++\Ex_Counter.exe",
          "port" => 8080,
          "min-procs" => 1,
          "max-procs" => 1
        )
    )
)
And I get this error on start-up:
C:\LightTPD>LightTPD.exe -f conf\lighttpd-inc.conf -m lib -D
cygwin warning:
MS-DOS style path detected: conf\lighttpd-inc.conf
Preferred POSIX equivalent is: conf/lighttpd-inc.conf
CYGWIN environment variable option "nodosfilewarning" turns off this warning.
Consult the user's guide for more details about POSIX paths:
http://cygwin.com/cygwin-ug-net/using.html#using-pathnames
2011-10-07 12:50:25: (log.c.166) server started
2011-10-07 12:50:25: (mod_fastcgi.c.1367) --- fastcgi spawning local
proc: C:\FastCGI\Examples\C++\Ex_Counter.exe
port: 8080
socket
max-procs: 1
2011-10-07 12:50:25: (mod_fastcgi.c.1391) --- fastcgi spawning
port: 8080
socket
current: 0 / 1
2011-10-07 12:50:25: (mod_fastcgi.c.1104) the fastcgi-backend C:\FastCGI\Examples\C++\Ex_Counter.exe failed to start:
2011-10-07 12:50:25: (mod_fastcgi.c.1108) child exited with status 0 C:\FastCGI\Examples\C++\Ex_Counter.exe
2011-10-07 12:50:25: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version.
If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
2011-10-07 12:50:25: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
2011-10-07 12:50:25: (server.c.942) Configuration of plugins failed. Going down.
I'm not using PHP, so I don't know where I should set this flag.

It appears that the example never calls FCGX_Init() before it starts using the other FCGX library functions. This causes FCGX_Accept_r to return a non-zero error condition, so your example exits with the status 0 reported in the log you are seeing. A corrected sketch follows the header excerpt below.
From fcgiapp.h:
/*
*----------------------------------------------------------------------
*
* FCGX_Accept_r --
*
* Accept a new request (multi-thread safe). Be sure to call
* FCGX_Init() first.
*
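With that in mind, a minimal corrected version of the loop could look like the sketch below. It assumes the standard fcgiapp.h API; the Windows libfcgi2 wrapper may use slightly different header and member names (e.g. Req.pOut instead of req.out), so adjust those to your build:

#include <cstdio>
#include <sstream>
#include <string>
#include "fcgiapp.h" // standard FastCGI developer-kit header; adapt for libfcgi2

int main()
{
    FCGX_Init();                      // must run before any other FCGX_* call
    FCGX_Request req;
    FCGX_InitRequest(&req, 0, 0);     // fd 0 is the listen socket lighttpd hands to the spawned process

    int count = 0;
    while (FCGX_Accept_r(&req) >= 0)  // blocks until lighttpd forwards a request
    {
        std::ostringstream ss;
        ss << "Content-Type: text/html\r\n\r\nHello World " << ++count;
        const std::string reply = ss.str();
        FCGX_PutStr(reply.data(), (int)reply.length(), req.out);
        FCGX_Finish_r(&req);          // release per-request state before the next accept
    }
    return 0;
}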

Related

C/C++ - How to make a singleton connection module in Apache HTTP Server?

Let's say I have this code for my Apache module:
#include <iostream>
#include <string>
#include <httpd.h>
#include <http_config.h>
#include <http_core.h>
#include <http_protocol.h>
#include <http_request.h>
#include <apr_strings.h>

int count = 0;

static void my_child_init(apr_pool_t *p, server_rec *s)
{
    count = 1000; // starts up with this number!
}

static int my_handler(request_rec *r)
{
    count++; // increments here
    ap_rputs(std::to_string(count).c_str(), r);
    return OK;
}

static void register_hooks(apr_pool_t *pool)
{
    ap_hook_child_init(my_child_init, NULL, NULL, APR_HOOK_MIDDLE);
    ap_hook_handler(my_handler, NULL, NULL, APR_HOOK_LAST);
}

module AP_MODULE_DECLARE_DATA myserver_module =
{
    STANDARD20_MODULE_STUFF,
    NULL,           // Per-directory configuration handler
    NULL,           // Merge handler for per-directory configurations
    NULL,           // Per-server configuration handler
    NULL,           // Merge handler for per-server configurations
    NULL,           // Any directives we may have for httpd
    register_hooks  // Our hook registering function
};
Now if I open my browser and go to localhost/my_server, I see the count incrementing every time I refresh the page, each refresh creating a new HTTP request to Apache.
1001 //from connection 1
1002 //from connection 1
1003 //from connection 1
1004 //from connection 1
...
I was expecting the count to increment every time I refresh. But sometimes Apache apparently creates another connection and the module is instantiated again, so I now have two independent counters running:
1151 //from connection 1
1152 //from connection 1
1001 // from connection 2
1153 //from connection 1
1002 // from connection 2
1003 // from connection 2
1154 //from connection 1
...
Is there any way to prevent Apache from loading the same module again?
Most common Apache MPM configurations will create multiple child processes. You can configure Apache to use a single process with many threads instead, or use shared memory for your counter.
The simplest portable way to use shared memory is to depend on the "slotmem" and "slotmem_shm" modules; mod_proxy_balancer uses them. An alternative is to use shared memory directly, the way server/scoreboard.c does.
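As a rough, hedged illustration of that direct route (not the slotmem API): keep one counter in anonymous APR shared memory created in post_config, which runs in the parent before the children are forked, so every child sees the same mapping. The hook and function names here (my_post_config) are placeholders, error handling is minimal, and anonymous shared memory requires an APR built with support for it:

#include <httpd.h>
#include <http_config.h>
#include <http_protocol.h>
#include <apr_shm.h>
#include <apr_atomic.h>
#include <string>

static apr_shm_t *counter_shm = NULL;
static volatile apr_uint32_t *counter = NULL;

// Runs in the parent process, so forked children inherit the mapping.
static int my_post_config(apr_pool_t *pconf, apr_pool_t *plog,
                          apr_pool_t *ptemp, server_rec *s)
{
    if (apr_shm_create(&counter_shm, sizeof(apr_uint32_t), NULL, pconf) != APR_SUCCESS)
        return HTTP_INTERNAL_SERVER_ERROR;
    counter = (volatile apr_uint32_t *)apr_shm_baseaddr_get(counter_shm);
    *counter = 1000;
    return OK;
}

static int my_handler(request_rec *r)
{
    // apr_atomic_inc32 returns the previous value, so add one for the new count.
    apr_uint32_t value = apr_atomic_inc32(counter) + 1;
    ap_rputs(std::to_string(value).c_str(), r);
    return OK;
}

static void register_hooks(apr_pool_t *pool)
{
    ap_hook_post_config(my_post_config, NULL, NULL, APR_HOOK_MIDDLE);
    ap_hook_handler(my_handler, NULL, NULL, APR_HOOK_LAST);
}

module AP_MODULE_DECLARE_DATA myserver_module =
{
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    register_hooks
};

With a threaded MPM the same code still works, since threads share the process address space; only the atomic increment matters there.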

gSOAP Chaining C++ Server Classes to Accept Messages on the Same Port Not Working

We have six WSDLs compiled into the same project, and due to hardware limits we can only open one port for listening.
To do this, we chose the approach described in chapter 7.2.8, "How to Chain C++ Server Classes to Accept Messages on the Same Port", of the gSOAP manual.
However, when using this approach we ran into several severe issues:
1. If lots of requests arrive concurrently, soap_begin_serve sometimes reports an error with error=-1, and the socket is closed by the SOAP server immediately after it is established.
2. If we call xxx.destroy() after soap_free_stream(), then soap_accept() reports a bad file descriptor error and stops working (solved).
Does anybody know the reasons for the above? How can we solve them?
Our code is very close to the example except for a few changes; see below.
// server thread
Abc::soapABCService server; // generated with soapcpp2 -i -x -n -QAbc
server.bind(NULL, 12345, 100);
server.soap_mode = SOAP_KEEP_ALIVE | SOAP_UTF_CSTRING;
server.recv_timeout = server.send_timeout = 60;
pthread_t pid;
while (true)
{
    server.accept();
    ...
    pthread_create(&pid, NULL, handle_new_request, server.copy());
} // while(true)

// work thread - the thread function
void *handle_new_request(void* arg)
{
    // generated with soapcpp2 -i -x -n -QAbc
    Abc::soapABCService *abc = (Abc::soapABCService*)arg;
    Uvw::soapUVWService uvw; // generated with soapcpp2 -i -x -n -QUvw
    Xyz::soapXYZService xyz; // generated with soapcpp2 -i -x -n -QXyz

    if (soap_begin_serve(abc))
    {
        // sometimes it reports an error here;
        // for an unknown reason the socket was closed by the SOAP server
        abc->soap_stream_fault(std::cerr);
    }
    else if (abc->dispatch() == SOAP_NO_METHOD)
    {
        soap_copy_stream(&uvw, abc);
        uvw.state = SOAP_COPY;
        if (uvw.dispatch() == SOAP_NO_METHOD)
        {
            soap_copy_stream(&xyz, &uvw);
            xyz.state = SOAP_COPY;
            if (xyz.dispatch())
            {
                soap_send_fault(&xyz); // send fault to client
                xyz.soap_stream_fault(std::cerr);
            }
            soap_free_stream(&xyz); // free the copy
            xyz.destroy();
        }
        else
        {
            soap_send_fault(&uvw); // send fault to client
            uvw.soap_stream_fault(std::cerr);
        }
        soap_free_stream(&uvw); // free the copy
        uvw.destroy();
    }
    else if (abc->error)
    {
        abc->soap_stream_fault(std::cerr);
    }

    abc->destroy();
    delete abc;
    return NULL;
}
Finally I found the reason why some connections were closed by the server right after they were established.
It's not the gSOAP server's fault: all connections were coming from the same machine, those clients were set up to reuse addresses, and that port reuse caused the problem.

Scala + ZMQ = Operation cannot be accomplished in current state

I am trying to get a Scala program to communicate with a C++ program via ZeroMQ using the request-reply pattern. The Scala program should send a request to the C++ program, which replies.
However I see the error
org.zeromq.ZMQException: Operation cannot be accomplished in current state
All I can find in the documentation is that one has to read the response before sending a second request. In my case I issue a single request and then read the response, and that read is where the exception is thrown.
Code of the server:
#include "zmq.hpp"
#include <string>
#include <iostream>
#include <thread>
int main()
{
zmq::context_t context(1);
zmq::socket_t socket(context, ZMQ_REP);
socket.bind("tcp://*:5555");
while (1) {
zmq::message_t request;
socket.recv(&request);
std::string requ = std::string(static_cast<char*>(request.data()), request.size());
std::cout << requ << std::endl;
// Write response
zmq::message_t req(2);
memcpy((void *)req.data(), "ok", 5);
socket.send(req);
}
}
Code of the client:
import org.zeromq.ZMQ
import org.zeromq.ZMQ.{Context, Socket}

object Adapter {
  def main( args: Array[String] ) = {
    val context = ZMQ.context(1)
    val socket = context.socket(ZMQ.REQ)

    println { "Connecting to backend" }
    socket.connect("tcp://127.0.0.1:5555")

    val request = "1 1 1 1".getBytes()
    request(request.length - 1) = 0.toByte

    println { "Sending Request" }
    if (!socket.send(request, 0))
      println { "could not send" }

    println { "Receiving Response" }
    val reply = socket.recv(0)
    println { "Received reply: " + new String(reply, 0, reply.length - 1) }
  }
}
The complete output of sbt:
OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jna7980154308052950568.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Connecting to backend
Sending Request
Receiving Response
[error] (run-main-0) org.zeromq.ZMQException: Operation cannot be accomplished in current state
org.zeromq.ZMQException: Operation cannot be accomplished in current state
at org.zeromq.ZMQ$Socket.raiseZMQException(ZMQ.java:448)
at org.zeromq.ZMQ$Socket.recv(ZMQ.java:368)
at ZeroMQActor$.main(ZeroMQExample.scala:56)
at ZeroMQActor.main(ZeroMQExample.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 5 s, completed Jun 16, 2015 4:42:42 PM
sbt pulls in Scala 2.9.1 and akka-zeromq 2.0. I have installed ZeroMQ 3.5 from source, but I see the same behavior when I install the Ubuntu package libzmq3-dev. One possible work-around is using JeroMQ, a pure-Java implementation of ZMQ, but I would prefer to depend on a single ZMQ library in my whole stack rather than dealing with interop issues.
Thanks in advance.
I believe
memcpy((void *)req.data(), "ok", 5);
should be
memcpy((void *)req.data(), "ok", 2);
The message was created with a size of 2, but 5 bytes are copied into it, overrunning the buffer; that alone could be enough to break message handling.
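A defensive variant, as a sketch, is to size the reply from a std::string so the length and the copy can never disagree; send_reply is a hypothetical helper that would replace the three response lines inside the server's while loop:

#include "zmq.hpp"
#include <cstring>
#include <string>

// Build the reply from a std::string so the message size and the copied
// length always agree.
static void send_reply(zmq::socket_t &socket, const std::string &text)
{
    zmq::message_t reply(text.size());
    std::memcpy(reply.data(), text.data(), text.size());
    socket.send(reply);
}

It would then be called as send_reply(socket, "ok");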

ZeroMQ: Address in use error when re-binding socket

After binding a ZeroMQ socket to an endpoint and closing the socket, binding another socket to the same endpoint requires several attempts. The previous calls to zmq_bind up until the successful one fail with the error "Address in use" (EADDRINUSE).
The following code demonstrates the problem:
#include <cassert>
#include <iostream>
#include "zmq.h"

int main() {
    void *ctx = zmq_ctx_new();
    assert( ctx );

    void *skt;
    skt = zmq_socket( ctx, ZMQ_REP );
    assert( skt );
    assert( zmq_bind( skt, "tcp://*:5555" ) == 0 );
    assert( zmq_close( skt ) == 0 );

    skt = zmq_socket( ctx, ZMQ_REP );
    assert( skt );

    int fail = 0;
    while ( zmq_bind( skt, "tcp://*:5555" ) ) { ++fail; }
    std::cout << fail << std::endl;
}
I'm using ZeroMQ 4.0.3 on Windows XP SP3, compiler is VS 2008. libzmq.dll has been built with the provided Visual Studio solution.
This prints 1 here when doing a "Debug" build (of both the code above and libzmq.dll) and 0 with a "Release" build. Strangely enough, when running the code above with a mixed build configuration (Debug code with the Release lib), fail counts up to 6.
Pieter Hintjens gave me the hint on the mailing list:
The call to zmq_close initiates the socket shutdown. This is done in a special "reaper" thread that ZeroMQ starts so that zmq_close is asynchronous and non-blocking. See "The reaper thread" in the whitepaper about ZeroMQ's architecture.
The code above does not wait for that thread to do the actual work, so the endpoint does not become available again immediately.
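A simple workaround, sketched under the assumption that the only expected failure is this transient EADDRINUSE (and that a C++11 compiler is available; under VS 2008 you would substitute Sleep()), is to retry the bind with a short back-off instead of asserting on the first attempt:

#include <zmq.h>
#include <errno.h>
#include <chrono>
#include <thread>

// Retry zmq_bind while the endpoint is still being released by the reaper thread.
static int bind_with_retry(void *skt, const char *endpoint, int max_attempts)
{
    for (int i = 0; i < max_attempts; ++i) {
        if (zmq_bind(skt, endpoint) == 0)
            return 0;                 // bound successfully
        if (zmq_errno() != EADDRINUSE)
            return -1;                // some other, genuine error
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    return -1;                        // still busy after all attempts
}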
When a TCP socket is closed, it enters a state called TIME_WAIT. While the socket is in that state it is not really closed, which in turn means that the address it used is not available again until it leaves that state.
So if you run your program twice in close succession, the socket will still be in TIME_WAIT from the first run when you attempt the second one, and you get an error like this.
You might want to read more about TCP, and especially about its operation and states.

What's so special about file descriptor 3 on Linux?

I'm working on a server application that will run on Linux and Mac OS X. It goes like this:
start main application
fork off the controller process
call lock_down() in the controller process
terminate main application
the controller process then forks again, creating a worker process
eventually the controller keeps forking more worker processes
I can log using several methods (e.g. syslog or a file), but right now I'm pondering syslog. The "funny" thing is that no syslog output is ever seen from the controller process unless I include the #ifdef section below.
The worker processes log flawlessly on Mac OS X and Linux, with or without the #ifdef'ed section below. The controller also logs flawlessly on Mac OS X without the #ifdef'ed section, but on Linux the #ifdef is needed if I want to see any output in syslog (or in the log file, for that matter) from the controller process.
So, why is that?
// headers needed by this snippet
#include <sys/resource.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <stdlib.h>

static int
lock_down(void)
{
    struct rlimit rl;
    unsigned int n;
    int fd0;
    int fd1;
    int fd2;

    // Reset file mode mask
    umask(0);

    // change the working directory
    if ((chdir("/")) < 0)
        return EXIT_FAILURE;

    // close any and all open file descriptors
    if (getrlimit(RLIMIT_NOFILE, &rl))
        return EXIT_FAILURE;
    if (RLIM_INFINITY == rl.rlim_max)
        rl.rlim_max = 1024;

    for (n = 0; n < rl.rlim_max; n++) {
#ifdef __linux__
        if (3 == n) // deep magic...
            continue;
#endif
        if (close(n) && (EBADF != errno))
            return EXIT_FAILURE;
    }

    // attach file descriptors 0, 1 and 2 to /dev/null
    fd0 = open("/dev/null", O_RDWR);
    fd1 = dup2(fd0, 1);
    fd2 = dup2(fd0, 2);
    if (0 != fd0)
        return EXIT_FAILURE;

    return EXIT_SUCCESS;
}
camh was close, but using closelog() was the idea that did the trick, so the honor goes to jilles. Something else must be going on, though, besides closing a file descriptor from under syslog's feet. To make the code work I added a call to closelog() just before the loop:
closelog();
for (n = 0; n < rl.rlim_max; n++) {
    if (close(n) && (EBADF != errno))
        return EXIT_FAILURE;
}
I was relying on a verbatim understanding of the manual page, which says:
The use of openlog() is optional; it will automatically be called by syslog() if necessary...
I interpreted this as saying that syslog would detect when the file descriptor had been closed under it. Apparently it does not: an explicit closelog() on Linux was needed to tell syslog that the descriptor was gone.
One more thing that still perplexes me is that not calling closelog() prevented the first forked process (the controller) from even opening and using a plain log file. The subsequently forked processes could use syslog or a log file without problems. Maybe there is some caching effect in the filesystem that gives the first forked process an unreliable idea of which file descriptors are available, while the later forked processes are delayed enough not to be affected?
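For reference, a minimal sketch of the ordering that ended up working, with error handling trimmed; the ident "controller" and the LOG_DAEMON facility are illustrative placeholders, not from the original code:

#include <sys/resource.h>
#include <syslog.h>
#include <fcntl.h>
#include <unistd.h>

static void close_all_and_relog(void)
{
    struct rlimit rl;
    getrlimit(RLIMIT_NOFILE, &rl);
    if (RLIM_INFINITY == rl.rlim_max)
        rl.rlim_max = 1024;

    closelog();                          // let syslog forget its soon-to-be-closed fd
    for (unsigned int n = 0; n < rl.rlim_max; n++)
        close(n);                        // includes the old /dev/log descriptor

    int fd0 = open("/dev/null", O_RDWR); // becomes fd 0
    dup2(fd0, 1);
    dup2(fd0, 2);

    openlog("controller", LOG_PID, LOG_DAEMON); // reconnect to /dev/log explicitly
}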
The special aspect of file descriptor 3 is that it will usually be the first file descriptor returned from a system call that allocates a new file descriptor, given that 0, 1 and 2 are usually set up for stdin, stdout and stderr.
This means that if any library function you have called allocates a file descriptor for its own internal purposes in order to perform its functions, it will get fd 3.
The openlog(3) library call will need to open /dev/log to communicate with the syslog daemon. If you subsequently close all file descriptors, you may break the syslog library functions if they are not written in a way to handle that.
The way to debug this on Linux is to use strace to trace the actual system calls that are being made; the use of a file descriptor for syslog then becomes obvious:
$ cat syslog_test.c
#include <stdio.h>
#include <syslog.h>
int main(void)
{
    openlog("test", LOG_PID, LOG_LOCAL0);
    syslog(LOG_ERR, "waaaaaah");
    closelog();
    return 0;
}
$ gcc -W -Wall -o syslog_test syslog_test.c
$ strace ./syslog_test
...
socket(PF_FILE, SOCK_DGRAM, 0) = 3
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
connect(3, {sa_family=AF_FILE, path="/dev/log"}, 16) = 0
send(3, "<131>Aug 21 00:47:52 test[24264]"..., 42, MSG_NOSIGNAL) = 42
close(3) = 0
exit_group(0) = ?
Process 24264 detached
syslog(3) may keep a file descriptor to syslogd's socket open; closing this under its feet is likely to cause problems. A closelog(3) call may help.
Syslog binds to a given descriptor at startup, most of the time descriptor 3. If you close it, you get no logs.
syslog-ng -d -v
Gives you more info about what it's doing behind the scenes.
The output should look something like this:
binding fd 3, inetaddr: 0.0.0.0, port: 514
io.c: Preparing fd 3 for reading
io.c: Preparing fd 4 for reading
binding fd 5, unixaddr: /dev/log
io.c: listening on fd 5