How to run a C++ module in Apache? - c++

I have written a C++ module for Apache. The following is the code:
mod_foo.hpp
#ifndef MOD_FOO_HPP
#define MOD_FOO_HPP
#ifdef __cplusplus
#define EXTERN_C_BLOCK_BEGIN extern "C" {
#define EXTERN_C_BLOCK_END }
#define EXTERN_C_FUNC extern "C"
#else
#define EXTERN_C_BLOCK_BEGIN
#define EXTERN_C_BLOCK_END
#define EXTERN_C_FUNC
#endif
#include <httpd.h>
#include <http_protocol.h>
#include <http_config.h>
#endif /* MOD_FOO_HPP */
mod_foo.c
#include "mod_foo.hpp"
EXTERN_C_FUNC
int foo_handler( request_rec* inpRequest )
{
    int nReturnVal = DECLINED;
    if ( inpRequest->handler != NULL && strcmp( inpRequest->handler, "foo" ) == 0 )
    {
        ap_rputs( "Hello World from FOO", inpRequest );
        nReturnVal = OK;
    }
    return nReturnVal;
}
EXTERN_C_FUNC
void foo_hooks( apr_pool_t* inpPool )
{
    ap_hook_handler( foo_handler, NULL, NULL, APR_HOOK_MIDDLE );
}
EXTERN_C_BLOCK_BEGIN
module AP_MODULE_DECLARE_DATA foo_module =
{
    STANDARD20_MODULE_STUFF,
    NULL,
    NULL,
    NULL,
    NULL,
    NULL,
    foo_hooks
};
EXTERN_C_BLOCK_END
The module compiles successfully and it also installs on the Apache server, but when I restart Apache after installing it, the following error occurs:
apache2: Syntax error on line 234 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/conf.d/foo.conf: API module structure 'foo_module' in file /usr/lib/apache2/modules/mod_foo.so is garbled - expected signature 41503232 but saw 41503234 - perhaps this is not an Apache module DSO, or was compiled for a different Apache version?
I added the LoadModule directive in httpd.conf to load the module, but only C++ modules give this error. Any idea how to resolve this problem?

I think the handler should be declared as static; this could cause the fault. Besides that, you should add an extern "C" prefix in front of the module, but adding an extern "C" prefix to every function is unnecessary.
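A minimal sketch of that suggestion, applied to the question's own sources (this only restates the advice above, not a tested fix):
#include "mod_foo.hpp"

/* File-local handler; only the module record itself needs C linkage. */
static int foo_handler( request_rec* inpRequest )
{
    if ( inpRequest->handler != NULL && strcmp( inpRequest->handler, "foo" ) == 0 )
    {
        ap_rputs( "Hello World from FOO", inpRequest );
        return OK;
    }
    return DECLINED;
}

static void foo_hooks( apr_pool_t* inpPool )
{
    ap_hook_handler( foo_handler, NULL, NULL, APR_HOOK_MIDDLE );
}

extern "C" {
module AP_MODULE_DECLARE_DATA foo_module =
{
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    foo_hooks
};
}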

Related

C++ Unresolved External Symbols Embedded Lua (based on longjmp issue) (not duplicate)

I will describe the problem as follows:
Compiler: Visual Studio 2019
The root of the problem is that longjmp crashes the process because I manually map my code into the process.
The code works fine as follows, but crashes on any syntax error in the Lua script because of longjmp:
extern "C" {
#include "lua.h"
#include "lualib.h"
.....
}
I want the C++ exception path, which comes from this block in the Lua sources:
#if defined(__cplusplus) && !defined(LUA_USE_LONGJMP) /* { */
/* C++ exceptions */
#define LUAI_THROW(L,c) throw(c)
#define LUAI_TRY(L,c,a) \
try { a } catch(...) { if ((c)->status == 0) (c)->status = -1; }
#define luai_jmpbuf int /* dummy variable */
#elif defined(LUA_USE_POSIX) /* }{ */
/* in POSIX, try _longjmp/_setjmp (more efficient) */
#define LUAI_THROW(L,c) _longjmp((c)->b, 1)
#define LUAI_TRY(L,c,a) if (_setjmp((c)->b) == 0) { a }
#define luai_jmpbuf jmp_buf
#else /* }{ */
/* ISO C handling with long jumps */
#define LUAI_THROW(L,c) longjmp((c)->b, 1)
#define LUAI_TRY(L,c,a) if (setjmp((c)->b) == 0) { a }
#define luai_jmpbuf jmp_buf
#endif /* } */
Because longjmp crashes my process, I decided to compile my code with the C++ compiler (without extern "C") and include the headers like this:
#include "lua.h"
#include "lualib.h"
.....
But this led to the following problem:
error LNK2019: unresolved external symbol _lua_pcall
...
...
...
I thought a lot about it but couldn't find a solution. It seems odd that it is a linker error, because all the Lua headers and .c files are part of my project.
By the way, I worked around the exception-throwing issue like this. I'm not sure whether it is correct, but it no longer crashes:
#define LUAI_THROW(L,c) c->throwed = true
#define LUAI_TRY(L,c,a) \
__try { a } __except(filter()) { if ((c)->status == 0 && ((c)->throwed)) (c)->status = -1; }
#define luai_jmpbuf int /* dummy variable */
struct lua_longjmp {
    struct lua_longjmp *previous;
    luai_jmpbuf b;
    volatile int status; /* error code */
    bool throwed;
};
It works as expected even if you build without C++ exceptions.
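(__try/__except here is MSVC structured exception handling, so this workaround only applies to MSVC builds; filter() refers to an exception filter function that is not shown in the snippet.)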

How to initialize parameters depending on the environment in which a program is called?

In a header file I have a parameter that specifies the name of a control file:
#define CTLFILE "server.ini"
This works fine. But now I want something like this:
If I am on the server
#define CTLFILE "server.ini"
else if I am on the client
#define CTLFILE "client.ini"
How can I implement this?
You can pass an option when launching your program.
For example, try calling the following program, passing server or client:
#include <stdio.h>
#include <string.h>
#define SERVER_FILE "server.ini"
#define CLIENT_FILE "client.ini"
int main (int argc, char *argv[])
{
    if (argc < 2)
    {
        fprintf(stderr, "You must pass the type of environment!\n");
        return 1;
    }
    if (strcmp(argv[1], "server") == 0)
    {
        printf ("File selected: %s\n", SERVER_FILE);
    }
    else if (strcmp(argv[1], "client") == 0)
    {
        printf ("File selected: %s\n", CLIENT_FILE);
    }
    else
    {
        fprintf(stderr, "Not supported environment %s\n", argv[1]);
        return 1;
    }
    return 0;
}
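For example, assuming the executable is built as a.out (a hypothetical name), ./a.out server prints "File selected: server.ini" and ./a.out client prints "File selected: client.ini"; anything else produces the error message and exit code 1.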
You can use conditional compilation with an #ifdef ... #endif pair.
For example, in the code, put it like
#ifdef SERVERSIDE
#define CTLFILE "server.ini"
#else
#define CTLFILE "client.ini"
#endif
Then, while compiling, pass the -DSERVERSIDE option to the compiler (reference: gcc).
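As a minimal sketch of that approach (the file name main.c and program name ctl are hypothetical), the same source builds into either variant depending on the flag:
/* main.c */
#include <stdio.h>

#ifdef SERVERSIDE
#define CTLFILE "server.ini"
#else
#define CTLFILE "client.ini"
#endif

int main(void)
{
    /* gcc -DSERVERSIDE main.c -o ctl   -> prints server.ini */
    /* gcc main.c -o ctl                -> prints client.ini */
    printf("Control file: %s\n", CTLFILE);
    return 0;
}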
You can't do it that way, because #define and all the other #something directives are preprocessor instructions. That means that after compilation all #something directives are gone, so you can't make the same program behave differently at run time with preprocessor instructions.
You have several choices:
*) Compile the program twice with different #define CTLFILE values.
*) Develop something like a configuration file in order to configure the execution of your program.
This needs extra development, since you will have to change the string dynamically. It's up to you.
*) Just test for the existence of the "server.ini" or "client.ini" file (a sketch follows below).
This works if the two files don't exist at the same time.
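A minimal sketch of that third option, assuming exactly one of the two files exists in the working directory (the helper name pick_ctlfile is hypothetical):
#include <stdio.h>

/* Pick the control file at run time by checking which one can be opened. */
static const char *pick_ctlfile(void)
{
    FILE *f = fopen("server.ini", "r");
    if (f != NULL)
    {
        fclose(f);
        return "server.ini";
    }
    return "client.ini";
}

int main(void)
{
    printf("Control file: %s\n", pick_ctlfile());
    return 0;
}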

How to expose a function from a C++ executable to LuaJIT

I am trying to load a Lua script into my C++ app and run it.
I decided to use LuaJIT to harness its FFI library.
But I have this weird problem where my Lua script cannot see the function symbols I defined in my C++ code, and I get this error upon running my app:
undefined symbol: test_func_a
Below is my C++ and Lua code:
//C++//
#include <stdlib.h>
#include <stdio.h>
#include <assert.h>
#include <lua.hpp>
#ifdef __cplusplus
extern "C" {
#endif
void test_func_a ( void ) {
    printf ( "hello\n" );
}
#ifdef __cplusplus
}
#endif
int main ( int argc, char** argv ) {
    lua_State *lua = luaL_newstate();
    assert ( lua );
    luaL_openlibs ( lua );
    /* lua_script_content is presumably the Lua source shown below; its definition is omitted here. */
    const int status = luaL_dostring ( lua, lua_script_content );
    if ( status )
        printf ( "Couldn't execute LUA code: %s\n", lua_tostring ( lua, -1 ));
    lua_close ( lua );
    return 0;
}
//Lua//
local ffi = require('ffi');
ffi.cdef[[
void test_func_a (void);
]]
ffi.C.test_func_a()
By default gcc will export all symbols, so how come LuaJIT fails to see them?
use:
extern "C" __declspec(dllexport) void test_func_a ( void ) {printf ("hello\n" );}

C++ macro concatenation does not work under gcc

#include <iostream>
void LOG_TRACE() { std::cout << "reach here"; }
#define LOG_LL_TRACE LOG_TRACE
#define LL_TRACE 0
#define __LOG(level) LOG_##level()
#define LOG(level) __LOG(##level)
int main()
{
    LOG(LL_TRACE);
    return 0;
}
The code works under Visual Studio, but gcc reports: test.cpp:13:1: error: pasting "(" and "LL_TRACE" does not give a valid preprocessing token.
How can I fix it?
PS: The macro expansion is supposed to be LOG(LL_TRACE) --> __LOG(LL_TRACE) --> LOG_LL_TRACE().
PS: Assume LL_TRACE must have a 0 value; do not remove it.
Two things make this code not compile on g++:
First, the error you're quoting is because you want to have this:
#define LOG(level) __LOG(level)
Notice there is no ##. Those hash marks mean concatenate, but you're not concatenating anything, just forwarding an argument.
The second error is that you have to remove
#define LL_TRACE 0
This line means the argument expands to 0, so you end up with __LOG(0), which expands into LOG_0(), which isn't defined.
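Put together, the expansion the answer describes can be traced like this:
// With #define LL_TRACE 0 still present and the ## removed:
//   LOG(LL_TRACE) -> __LOG(0) -> LOG_0()                        // error: LOG_0 is not defined
// After also removing #define LL_TRACE 0:
//   LOG(LL_TRACE) -> __LOG(LL_TRACE) -> LOG_LL_TRACE() -> LOG_TRACE()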
Shouldn't it be:
#define LOG(level) __LOG(level)
That works:
#include <iostream>
void LOG_TRACE() { std::cout << "reach here"; }
#define LOG_LL_TRACE LOG_TRACE
#define __LOG( level ) LOG_##level()
#define LOG(level) __LOG(level)
int main()
{
    LOG( LL_TRACE );
    return 0;
}

Ogre3D shows segfault error

I'm writing a game using Ogre3D and I have a problem.
When I start the program, it shows a segfault error:
*-*-* OGRE Initialising
*-*-* Version 1.7.2 (Cthugha)
Creating resource group Essential
Added resource location '../media/packs/SdkTrays.zip' of type 'Zip' to resource group 'Essential'
Added resource location '../media' of type 'FileSystem' to resource group 'General'
Added resource location '../media/materials/scripts' of type 'FileSystem' to resource group 'General'
Added resource location '../media/materials/textures' of type 'FileSystem' to resource group 'General'
Added resource location '../media/models' of type 'FileSystem' to resource group 'General'
Naruszenie ochrony pamięci [Polish: segmentation fault]
And I don't know why...
Code:
#define OGRE_CHANGE1 ((1 << 16) | (1 << 8))
#include "Ogre.h"
#include "ExampleApplication.h"
#if OGRE_PLATFORM == OGRE_PLATFORM_WIN32
#define WIN32_LEAN_AND_MEAN
#include "windows.h"
#else
#include <iostream>
#endif
class MyApp : public ExampleApplication
{
protected:
public:
    MyApp()
    {
    }
    ~MyApp()
    {
    }
protected:
    void createScene(void)
    {
    }
};

#ifdef __cplusplus
extern "C" {
#endif

#if OGRE_PLATFORM == OGRE_PLATFORM_WIN32
INT WINAPI WinMain( HINSTANCE hInst, HINSTANCE, LPSTR strCmdLine, INT )
#else
int main(int argc, char **argv)
#endif
{
    MyApp App;
    try
    {
        App.go();
        return 0;
    }
    catch (Ogre::Exception& e)
    {
#if OGRE_PLATFORM == OGRE_PLATFORM_WIN32
        MessageBox( NULL, e.getFullDescription().c_str(), "Exception!",
                    MB_OK | MB_ICONERROR | MB_TASKMODAL);
#else
        std::cerr << "Exception:\n";
        std::cerr << e.getFullDescription().c_str() << "\n";
#endif
        return 1;
    }
}

#ifdef __cplusplus
}
#endif
(Code partly from Ogre Wiki)
resources.cfg:
# Resources required by the sample browser and most samples.
[Essential]
Zip=../media/packs/SdkTrays.zip
# Resource locations to be added to the default path
[General]
FileSystem=../media
FileSystem=../media/materials/scripts
FileSystem=../media/materials/textures
FileSystem=../media/models
and plugins.cfg:
# Defines plugins to load
# Define plugin folder
PluginFolder=/usr/lib/OGRE
# Define plugins
# Plugin=RenderSystem_Direct3D9
# Plugin=RenderSystem_Direct3D10
# Plugin=RenderSystem_Direct3D11
Plugin=RenderSystem_GL
# Plugin=RenderSystem_GLES
Plugin=Plugin_ParticleFX
Plugin=Plugin_BSPSceneManager
Plugin=Plugin_CgProgramManager
Plugin=Plugin_PCZSceneManager
Plugin=Plugin_OctreeZone
Plugin=Plugin_OctreeSceneManager
And when I comment out Plugin=Plugin_CgProgramManager in plugins.cfg, the program works, but I need this plugin. :)
Please help. Thanks in advance.
Compile the program with debugging information included (with GCC, this means make sure the -g option is passed to the compiler).
Run it in a debugger.
When it crashes, you'll get a stack trace.
Investigate whether it seems to depend on something you did (or did not do, such as a missing initialization), or whether it looks like a crash in Ogre3D proper.
If the former, fix it.
If the latter, report it.
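A minimal sketch of those steps (the binary name SampleApp is hypothetical):
g++ -g -o SampleApp SampleApp.cpp ...   # add -g to the compile command you already use
gdb ./SampleApp
(gdb) run
# the segfault now stops execution inside gdb
(gdb) backtrace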