What is the difference between fcgi_stdio and fcgiapp? - c++

I am trying to get started with FastCGI development, so I downloaded the reference implementation of libfcgi and tried to get a test program to run with lighttpd. Since fcgi_stdio allows for CGI backwards compatibility, I decided to start with that.
However, I could not get examples/tiny-fcgi.c to work with lighttpd; it yielded an internal server error 500. The same configuration runs the lighttpd example program (http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs_ModFastCGI, below "C/C++ FastCGI on lightty named socket") flawlessly. It is totally unclear to me why the example supplied by default would not work.
Questions:
What is wrong with the examples/tiny-fcgi.c example from the reference implementation causing lighttpd to return error 500?
Which implementation of FastCGI is preferable for C++ development (fcgi_stdio, fcgiapp, other)? (There is something for streams in the fcgi package, but I failed to find good/any documentation for it.)

I have tested the unmodified example (Ubuntu/13.10/amd-64, Apache/2.4.6, libapache2-mod-fastcgi/2.4.7~0910052141-1.1, libfcgi-dev/2.4.0-8.1ubuntu4) and it runs ok:
manuelz@garibaldi:~$ curl habrich/tiny-fcgi
<title>FastCGI Hello! (C, fcgi_stdio library)</title>
<h1>FastCGI Hello! (C, fcgi_stdio library)</h1>
Request number 2 running on host <i>habrich</i>
Answers:
Hard to tell without knowing your configuration, but here's a shot in the dark: getenv will return NULL if SERVER_NAME is undefined.
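To illustrate, here is a minimal sketch along the lines of examples/tiny-fcgi.c with a guard against a NULL getenv result; the guard and the "(unknown)" fallback are my additions, not part of the shipped example.
#include "fcgi_stdio.h"
#include <stdlib.h>

int main(void)
{
    int count = 0;
    while (FCGI_Accept() >= 0) {
        const char *server = getenv("SERVER_NAME");
        printf("Content-type: text/html\r\n\r\n"
               "<title>FastCGI Hello!</title>"
               "<h1>FastCGI Hello!</h1>"
               "Request number %d running on host <i>%s</i>\n",
               ++count, server ? server : "(unknown)");
    }
    return 0;
}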
fcgiapp is the base implementation: I would use that one for new development. fcgi_stdio is a wrapper for compatibility with CGI: use that for migrating legacy CGI projects. Quote:
fcgi_stdio is implemented as a thin layer on top of fcgiapp
You can find decent documentation for fcgiapp in the fcgiapp.h header.
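For comparison, a rough sketch of the same request loop written against the fcgiapp API (the FCGX_* functions declared in fcgiapp.h); error handling is kept minimal and the HTML is just an example.
#include "fcgiapp.h"

int main(void)
{
    FCGX_Init();
    FCGX_Request request;
    FCGX_InitRequest(&request, 0, 0);

    int count = 0;
    while (FCGX_Accept_r(&request) >= 0) {
        const char *server = FCGX_GetParam("SERVER_NAME", request.envp);
        FCGX_FPrintF(request.out,
                     "Content-type: text/html\r\n\r\n"
                     "<h1>FastCGI Hello! (fcgiapp)</h1>"
                     "Request number %d running on host <i>%s</i>\n",
                     ++count, server ? server : "(unknown)");
        FCGX_Finish_r(&request);
    }
    return 0;
}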

I do not know whether your problem has the same origin as mine had, but at least it has the same symptoms.
There are different versions of the tiny-fcgi example. I first tried the one given here (example 1). This failed with an internal server error. However, the example given here (which does basically the same thing) works. The small but crucial difference is that the working code uses
getenv("SERVER_NAME")
instead of
getenv("SERVER_HOSTNAME")

Not exactly the same for me, but if I avoid using getenv the example runs without errors; I still have to work out why getenv leads to an error.

Related

"USBSerial" has no member "printf" on STM32F411CEU6 in PlatformIO / ststm32

Trying to run this USB Serial example (bottom) to learn MBED, but I get the following compilation error:
class "USBSerial" has no member "printf"
Is it possible it isn't implemented for the STM32F411? Or is this a problem with MBED itself?
Seems like this should be basic functionality. I'm not finding much useful info on Google when searching for this error. Has anyone else seen this error before?
potentially useful details:
IDE: vscode/platformIO
platformio.ini:
[env:nucleo_f411re]
platform = ststm32
framework = mbed
board = nucleo_f411re
monitor_speed = 115200
MBED version: 6.2 (from memory, though I doubt it matters since I checked the docs for a few versions and the API and example appear unchanged)
The method printf() (which is a C, not a C++ concept anyway) does not exist, simple as that. Use sprintf() if that's what you're familiar with, then USBSerial.write(), perhaps.
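A rough sketch of that suggestion, assuming a write(buffer, length) overload is available on USBSerial in your Mbed version (check the API docs for your release); the message content is made up for illustration.
#include "mbed.h"
#include "USBSerial.h"
#include <cstdio>

USBSerial serial;

int main()
{
    char buf[64];
    int count = 0;
    while (true) {
        // Format into a buffer first, then push the bytes out over USB.
        int n = snprintf(buf, sizeof(buf), "count = %d\r\n", count++);
        if (n > 0) {
            serial.write(buf, n);   // assumption: a write(const void*, size_t) overload exists
        }
        thread_sleep_for(1000);     // Mbed OS 6 millisecond sleep
    }
}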

detecting if program is installed on machine

Say I have an application I write that relies, for some task, on an external app (let's call it "tool") that is installed on my machine. In my program I call it with system("tool myarguments");, which works fine.
Now I want to distribute my app. Of course, the end user might not have "tool" installed on his machine, so I would like my app to check this and print out a message for the user. So my question is:
Is there a portable way to check for the existence of an app on the machine? (assuming we know its name and it is accessible through the machine's shell).
Additional information: my first idea was to check for the existence of the binary file, but:
This is platform dependent,
depending on how it has been installed (built from source, installed through a package, ...), it might not always be in the same place, although it can be accessed through the local path.
My first opinion on this question is "No", but maybe somebody has an idea?
Reference: system()
Related: stackoverflow.com/questions/7045879
If you use the Qt toolkit, QProcess may help you.
Edit: also check the QProcess::error() return value: if it is QProcess::FailedToStart, then either the tool is not installed or you have insufficient permissions.
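A small sketch of that idea, assuming Qt is available; "--version" is a hypothetical harmless argument, adapt it for your actual tool.
#include <QProcess>

bool toolAvailable()
{
    QProcess proc;
    proc.start("tool", QStringList() << "--version");  // "--version" assumed side-effect free
    proc.waitForFinished();
    // FailedToStart means the binary was not found or permissions are insufficient.
    return proc.error() != QProcess::FailedToStart;
}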
If running the tool without arguments has no side effects, and it is expected to return an exit code of 0, you can use system("tool") to check the tool's existence.
You can check whether the command has been found by checking system's return value like this:
#include <cstdlib>
#include <iostream>

int ret = std::system("tool");
if (ret != 0) {
    std::cout << "tool is not here, move along\n";
}
It is portable in the sense that system is expected to return 0 if all goes well and the command return status is 0 too.
For example, on Linux, running system("non_existing_command") returns 0x7F00 (same type of value as returned by wait()).
On Windows, it returns 1 instead.

libPd and c++ wrapper implementation

I'm trying to use libPd, the wrapper for PureData.
But the documentation is poor and I'm not very into C++.
Do you know how I can simply send a floating-point value to a Pd patch?
Do I need to install libPd, or can I just include the files?
First of all, check out ofxpd. It has an excellent libpd implementation with OpenFrameworks. If you are starting with C++ you may want to start with OpenFrameworks, since it has some great documentation and nice integration with Pd via the ofxpd extension.
There are two good references for getting started with libpd (though neither cover C++ in too much detail): the original article and Peter Brinkmann's book.
On the libpd wiki there is a page for getting started with libpd. The linked project at the bottom has some code snippets in main.cpp that demonstrate how to send floats to your Pd patch.
pd.sendBang("fromCPP");
pd.sendFloat("fromCPP", 100);
pd.sendSymbol("fromCPP", "test string");
In your Pd patch you'll set up a [receive fromCPP] and then these messages will register in your patch.
In order to get the print output you have to use the receivers from libpd to receive the strings and then do something with them. libpd comes with PdBase, which is a great class for getting libpd up and running. PdBase has sendBang, sendFloat, sendMessage, and also has the receivers set up so that you can get output from your Pd patch.
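As a rough, self-contained sketch of that idea (the patch file name test.pd and the channel counts are assumptions; the PdBase calls follow the libpd C++ examples):
#include "PdBase.hpp"

int main()
{
    pd::PdBase pd;
    if (!pd.init(0, 2, 44100)) {                     // 0 inputs, 2 outputs, 44.1 kHz
        return 1;
    }
    pd::Patch patch = pd.openPatch("test.pd", ".");  // hypothetical patch file
    pd.computeAudio(true);

    pd.sendBang("fromCPP");
    pd.sendFloat("fromCPP", 100);                    // picked up by [receive fromCPP]
    pd.sendSymbol("fromCPP", "test string");

    pd.closePatch(patch);
    return 0;
}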
If you want to send a value to a running instance of Pd (the standalone application), you could do so via Pd's networking facilities.
e.g.
[netreceive 65432 1]
|
[route value]
|
[print]
will receive data sent from the cmdline via:
echo "value 1.234567;" | pdsend 65432 localhost udp
You can also send multiple values at once, e.g.
echo "value 1.234567 3.141592;" | pdsend 65432 localhost udp
If you find pdsend too slow for your purposes (e.g. launching the executable for each message you want to send incurs considerable overhead!), you could construct the message directly in your application and use an ordinary UDP socket to send FUDI messages to Pd.
FUDI messages really are simple text strings, with atoms separated by whitespace and a terminating semicolon, e.g.
accelerator 1.23 3.14 2.97; button 1;
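A bare-bones sketch of sending such a FUDI string to the patch above over a plain POSIX UDP socket (host, port, and message mirror the earlier example; treat this as an illustration, not a tested client):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(65432);                 // port used by [netreceive 65432 1]
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    const char *msg = "value 1.234567;\n";        // FUDI: atoms, then a semicolon
    sendto(sock, msg, std::strlen(msg), 0,
           reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    close(sock);
    return 0;
}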
You might also consider using OSC, but for this you will need some externals (OSC by mrpeach; net by mrpeach (or iemnet)) on the Pd side.
As for performance, I've been using the latter with complex tracking data (hundreds of values per frame at 125 fps) and for streaming multichannel audio, so I don't think this is a problem.
If you are already using libPd and only want to communicate from the host application, use Adam's solution (but your question is a bit vague about that, so I'm including this answer just in case).

UDP Streaming with ffmpeg - overrun_nonfatal option

I'm working on software which uses the FFmpeg C++ libs to acquire from a UDP stream.
FFmpeg (1.2) is integrated and running, but I get some errors (the acquisition crashes and restarts). The log displays the following message:
Circular buffer overrun. To avoid, increase fifo_size URL option. To survive in such case, use overrun_nonfatal option
I searched online for documentation about how to use this option, but I only found information about how to use it when running the ffmpeg executable directly.
Would someone know how to set the correct option in my C++ code to:
- increase fifo_size
- use overrun_nonfatal option
Thanks
The same options work from the command line or from the C++ libraries; you need to modify your UDP URL as follows:
If your original URL looks like this:
udp://@239.1.1.7:5107
Add the fifo_size and overrun parameters like this:
"udp://#239.1.1.7:5107?overrun_nonfatal=1&fifo_size=50000000"
Remember to escape the URL with quotes.
overrun_nonfatal=1 prevents ffmpeg from exiting, it can recover in most circumstances.
fifo_size=50000000 uses a 50MB udp input buffer (default 5MB)
The only documentation is in the source code:
http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavformat/udp.c;h=5b5c7cb7dfc1aed3f71ea0c3e980be54757d3c62;hb=dd0a9b78db0eeea72183bd3f5bc5fe51a5d3f537
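In C++ code the same URL string can simply be handed to avformat_open_input; a minimal sketch, assuming the multicast address from the answer above and the FFmpeg 1.x-era API (av_register_all is only needed on those old releases):
extern "C" {
#include <libavformat/avformat.h>
}

int open_udp_input(AVFormatContext **ctx)
{
    const char *url =
        "udp://@239.1.1.7:5107?overrun_nonfatal=1&fifo_size=50000000";
    av_register_all();           // required on old releases such as FFmpeg 1.2
    avformat_network_init();
    return avformat_open_input(ctx, url, nullptr, nullptr);   // 0 on success
}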
I don't have enough reputation to comment on the other answer, but if I did I would say that, studying the source linked in that answer:
fifo_size is measured in multiples of 188 bytes (packets), according to the line:
s->circular_buffer_size = strtol(buf, NULL, 10)*188;
so whilst Grant is roughly correct that the default is about 5MB, given the line:
s->circular_buffer_size = 7*188*4096;
if you want a circular buffer of 50MB you should really set the fifo_size parameter to something closer to 50*1024*1024/188 (roughly 279000), otherwise 50000000 will give 50000000*188 bytes, which is closer to 8965MB!

Strange semantic error

I have reinstalled Emacs 24.2.50 on a new Linux host and started a new dotEmacs config based on magnars' Emacs configuration. Since I have used CEDET with some success in my previous workflow, I started configuring it. However, there is some strange behaviour whenever I load a C++ source file.
[This Part Is Solved]
As expected, semantic parses all included files (and during the initial setup parses all files specified by the semantic-add-system-include variables), but it prints an error message that goes like this:
WARNING: semantic-find-file-noselect called for /usr/include/c++/4.7/vector while in set-auto-mode for /usr/include/c++/4.7/vector. You should call the responsible function into 'mode-local-init-hook'.
In the above example the error is printed for the STL vector, but a corresponding error message is printed for every file included by the one I'm visiting and any subsequent includes. As a result it takes quite a long time to finish, and unfortunately the process is repeated any time I open a new buffer.
[This Problem Is Solved Too]
Furthermore, it looks like the parsing doesn't really work: when I place the point above a non-C primitive type (i.e. not int, double, float, etc.), instead of printing the type's definition in the modeline, an error message like
Idle Service Error semantic-idle-local-symbol-highlight-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, (((0) \"IndexMap\"))"
Idle Service Error semantic-idle-summary-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, ((\"fXBetween\" 0 nil nil))"
is shown, where DEPFETResolutionAnalysis.cc is the file & buffer I'm currently editing and IndexMap and fXBetween are types defined in files included by the file I'm editing (or by some file it includes).
I have not tested any further features of CEDET/semantic as the problem is pretty annoying. My cedet config can be found here.
EDIT: With the help of Alex Ott I kinda solved the first problem. It was due to my horrible cedet initialisation. See his first answer for the proper way to configure CEDET!
There still remains the problem with the Idle Service Error (which, when enabling global-semantic-idle-local-symbol-highlight-mode, occurs permanently, not only when checking the definition of the type at point).
And there is the new problem of how to disable the site-wide init file(s).
EDIT2: I have executed semantic-debug-idle-function in a buffer where the problem occurs and it produces ~700kB [sic!] of output. It looks like it is performing some operations on a data container which, by the looks of it, contains information on all the symbols defined in the files parsed. As I have parsed a rather large package (~20MB of source files) this table is rather large. Can semantic handle a database that large, or is this impossible and the reason for my problem?
EDIT3: Deleting the content of ~/.semanticdb and reparsing all includes did the trick. I still need to disable the site-wide init files, but as this is not related to CEDET I will close this question (the question related to the site-wide init files can be found here).
You need to change your init file so that it loads CEDET only once, not in a hook that is called for every .h/.hpp/.c/.cpp file. You can use this config as a base, and read more in the following article.
The problem you have is caused by Semantic trying to analyze header files: when it tries to open them, its initialization routines are called again, and again...
The first problem was solved by correctly configuring CEDET, which is described on Alex Ott's homepage. His answer solves this first problem. The config file specified in his answer is a great start for a nice config; I have used the very same one to configure CEDET for my needs.
The second problem vanished once I updated CEDET from 1.1 to the bazaar (repository) version, which is explained here and in Alex's article. Additionally, one must delete the content of the directory ~/.semanticdb (which contains the semantic database and was corrupted, I guess).
I'd like to thank Alex Ott for his help and sticking with me throughout my journey to the solution :)