How to use C++ optimization flags in SWIG? - c++

I am creating a Python module that is implemented in C++. I am using SWIG to create the interface. There are various ways to create the extension; I'm using the "preferred approach," which is via Python's distutils and which is described here. The name of my module is "ParseEvents," and to compile it I run the following two commands:
swig -c++ -python ParseEvents.i
python setup.py build_ext --inplace
The first command creates a file ParseEvents_wrap.cxx
The second command uses the following setup.py file:
from distutils.core import setup, Extension

ParseEvents_module = Extension('_ParseEvents',
                               sources=['ParseEvents_wrap.cxx'],
                               extra_compile_args=["-Wno-deprecated", "-O3"],
                               )

setup(name='ParseEvents',
      ext_modules=[ParseEvents_module],
      py_modules=["ParseEvents"],
      )
Question: Where and how do I specify that I want my C++ code to be compiled with the -O3 compiler flag? I guessed that it would just be in the "extra_compile_args" part of the setup.py file, but that doesn't seem to be the case. When I run the second command (python setup.py build_ext --inplace), here's the output:
running build_ext
building '_ParseEvents' extension
creating build
creating build/temp.linux-x86_64-2.6
gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC -I/usr/include/python2.4 -c ParseEvents_wrap.cxx -o build/temp.linux-x86_64-2.4/ParseEvents_wrap.o -Wno-deprecated -O3
c++ -pthread -shared build/temp.linux-x86_64-2.4/ParseEvents_wrap.o -o _ParseEvents.so
Note that both the -O2 and -O3 flags are present on the second-to-last line of the output; I'd like to remove the -O2.

The GCC doc explicitly says:
http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Optimize-Options.html
If you use multiple -O options, with or without level numbers, the last such option is the one that is effective.
This means your code will be compiled with -O3 in effect, just as you want. There is no need to worry about the duplicate optimization flags.

Distutils has the lovely feature of providing all the same flags that Python was compiled with. The result is that adding extra flags is easy, but removing them is a total pain. Doing so involves subclassing the compiler class, catching the arguments, and manually removing the offending flag from the argument list used by the compile function. That's the theory, anyway; the docs are too sparse to actually guide you through what you have to do to make that happen (a rough sketch of the idea follows below).
But like Luther said, in your case the extra -O2 doesn't hurt anything.
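For what it's worth, here is a minimal sketch of that subclassing approach. It is not from the original answer and assumes distutils selects a Unix-style compiler (which keeps its base command line, including the inherited -O2, in the compiler_so attribute); treat it as a starting point rather than a drop-in fix.

from distutils.core import setup, Extension
from distutils.command.build_ext import build_ext

class BuildExtStripOpt(build_ext):
    def build_extensions(self):
        # Unix-style compilers keep the base compile command (including the
        # -O2 inherited from Python's own build) in compiler_so; drop every
        # -O flag so only the -O3 from extra_compile_args remains.
        if hasattr(self.compiler, 'compiler_so'):
            self.compiler.compiler_so = [flag for flag in self.compiler.compiler_so
                                         if not flag.startswith('-O')]
        build_ext.build_extensions(self)

ParseEvents_module = Extension('_ParseEvents',
                               sources=['ParseEvents_wrap.cxx'],
                               extra_compile_args=["-Wno-deprecated", "-O3"])

setup(name='ParseEvents',
      cmdclass={'build_ext': BuildExtStripOpt},
      ext_modules=[ParseEvents_module],
      py_modules=["ParseEvents"])

With a cmdclass override like this, the gcc command line should end up with only the -O3 from extra_compile_args; adjust the filter if you need to keep other -O options.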

Related

GCC with -std=c++11 does not see C++ header files (via PyDSTool)

I am working on a complex project built in Python 2.7 that uses the PyDSTool package for the analysis of dynamical systems. PyDSTool provides two C-based integrators - Radau and Dopri - which I want to use to integrate my system of equations, whose source is coded in a bunch of C/C++ files.
I have little control over the package, and when I instantiate the integrator I can only add header files (*.H), source files (*.C, *.CPP), and pass the directories to include in the compiler's search path as well as the libraries to link against.
Since a substantial part of the code is based on C++11, I am also passing the -std=c++11 argument to the compiler.
Eventually, /PyDSTool/Generators/mixins.py launches a setup command (line 185), which in turn runs distutils' build_ext command, to which all the above flags are appended.
For the sake of clarity: the flags that I am appending are:
compile options: '-I/usr/lib64/python2.7/site-packages/numpy/core/include -I/home/maurizio/Dropbox/StabilityAnalysis_tmp -I/usr/local/pydstool/PyDSTool/integrator -I/usr/include/python2_7 -I/usr/include/numpy -I/home/maurizio/Dropbox/Ongoing_Projects/pycustommodules -I/home/maurizio/Dropbox/Ongoing_Projects/c_libraries -I/home/maurizio/Dropbox/Ongoing_Projects/c_libraries/models -I/home/maurizio/Dropbox/Ongoing_Projects/DePitta_PNAS/Software/Stability_Analysis/ -I/usr/lib64/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c'
extra options: '-std=c++11 -w -Wno-return-type -Wall -lpython2.7 -lm -lgsl -lgslcblas -D__DOPRI__'
The resulting compilation command as issued by PyDSTool reads:
error: Command "gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/lib64/python2.7/site-packages/numpy/core/include -I/home/maurizio/Dropbox/StabilityAnalysis_tmp -I/usr/local/pydstool/PyDSTool/integrator -I/usr/include/python2_7 -I/usr/include/numpy -I/home/maurizio/Dropbox/Ongoing_Projects/pycustommodules -I/home/maurizio/Dropbox/Ongoing_Projects/c_libraries -I/home/maurizio/Dropbox/Ongoing_Projects/c_libraries/models -I/home/maurizio/Dropbox/Ongoing_Projects/DePitta_PNAS/Software/Stability_Analysis/ -I/usr/lib64/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c /home/maurizio/Dropbox/StabilityAnalysis_tmp/dop853_temp/ei_network_vf.c -o /home/maurizio/Dropbox/StabilityAnalysis_tmp/dop853_temp/home/maurizio/Dropbox/StabilityAnalysis_tmp/dop853_temp/ei_network_vf.o -std=c++11 -w -Wno-return-type -Wall -lpython2.7 -lm -lgsl -lgslcblas -D__DOPRI__" failed with exit status 1
Looking into the build.log file automatically generated by PyDSTool, it turns out that the exit status is due to the fact that the compiler cannot find the C++ headers used by several routines/libs in my code, e.g.
/usr/include/blitz/blitz.h:45:18: fatal error: string: No such file or directory
#include <string>
^
Compilation Terminated
Now, it is not a problem with my code, because if I compile my code standalone in Python, or through scipy.weave with the same compile and extra options pasted above, it works. It is a problem of making PyDSTool build the code within the integrator. As I am NOT familiar with distutils and all the gcc options, I hope there is some expert here who could provide me with some insight. I suspect, in fact, that I am missing some option that should be passed to the compiler.
Just for the sake of completeness: the issue I pointed out above does not have an easy workaround. PyDSTool's C-based integrators (i.e. Radau and Dopri) cannot be compiled with the equation source code in C++, only in C. So either you recast your code in plain C, or you try to edit the PyDSTool integrators and recast them in C++. The first option is likely the only one currently feasible (at least for non-experts such as the author).

Setting Up CPLEX in Eclipse C++ on Linux

I have installed CPLEX 12.6.3 (CPLEX_Studio_Community1263) and I want to integrate CPLEX in my Eclipse C++ project (on Linux). But I don't know which steps I have to follow to include CPLEX in my project.
Even when following exactly the steps shown at this link, it is still not working for me (I can't import cplex.jar into my project). The path of my cplex.jar is
/opt/ibm/ILOG/CPLEX_Studio_Community1263/cplex/lib/cplex.jar
When I right-click on my project and go to
Properties --> Settings --> GCC C++ Linker --> Libraries
to add cplex.jar to my project, it is impossible to add the .jar because I can't select it (it is disabled and impossible to select).
Can someone explain to me how I can include CPLEX in my project?
The link you reference is for setting up a Java program. This will not help you.
Instead, you should try running one of the C++ examples shipped with CPLEX. Try the following (assuming your path is correct from above):
$ cd /opt/ibm/ILOG/CPLEX_Studio_Community1263/cplex/examples/x86-64_linux/static_pic
$ make ilolpex1 2>&1 | tee output.txt
This will save the output in output.txt so that you can look at it later. It should give you an idea of what the required command line arguments are.
For example, on my system (x86-64_linux), I see this in the output:
$ make ilolpex1
g++ -O0 -c -m64 -O -fPIC -fno-strict-aliasing -fexceptions -DNDEBUG -DIL_STD -I../../../include -I../../../../concert/include ../../../examples/src/cpp/ilolpex1.cpp -o ilolpex1.o
g++ -O0 -m64 -O -fPIC -fno-strict-aliasing -fexceptions -DNDEBUG -DIL_STD -I../../../include -I../../../../concert/include -L../../../lib/x86-64_linux/static_pic -L../../../../concert/lib/x86-64_linux/static_pic -o ilolpex1 ilolpex1.o -lconcert -lilocplex -lcplex -lm -lpthread
This tells you everything you need to know to compile and link your program. You'll just need to figure out where to enter this information in Eclipse.

pylab install error gcc windows 10

So I recently installed Theano in a Python 2.7 environment within Anaconda 3, on Windows 10. Theano passed theano.test(), at least. I am using example code from deeplearning.net, and I have successfully run the first block on the linked page, which defines a Theano function. When I go to install pylab via pip install pylab so that I can use skimage for the second block, my installer quits while doing some gcc call for the portion that looks like it says "shared geometry". One thing I noticed right away is that the -DEBUG flag is misspelled as -DDEBUG. Could this be the cause? Does it have something to do with msvcr90.dll, and if so, what is that and what do I need to do about it? Also, probably important: I'm using a slightly (6 months or so) outdated TDM-GCC, which is 4.9-something. Here is the line in question, as well as a few others that might be interesting:
copying skimage\_shared\tests\__init__.py -> build\lib.win-amd64-2.7\skimage\_shared\tests
running build_ext
Looking for python27.dll
Cannot build msvcr library: "msvcr90d.dll" not found
customize Mingw32CCompiler
customize Mingw32CCompiler using build_ext
building 'skimage._shared.geometry' extension
compiling C sources
C compiler: gcc -g -DDEBUG -DMS_WIN64 -O0 -Wall -Wstrict-prototypes
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
creating build\temp.win-amd64-2.7\Release\skimage
creating build\temp.win-amd64-2.7\Release\skimage\_shared
compile options: '-DNPY_MINGW_USE_CUSTOM_MSVCR -D__MSVCRT_VERSION__=0x0900 -I"C:\Users\USE DIS\Anaconda3\envs\py27\lib\site-packages\numpy\core\include" -I"C:\Users\USE DIS\Anaconda3\envs\py27\include" -I"C:\Users\USE DIS\Anaconda3\envs\py27\PC" -c'
gcc -g -DDEBUG -DMS_WIN64 -O0 -Wall -Wstrict-prototypes -DNPY_MINGW_USE_CUSTOM_MSVCR -D__MSVCRT_VERSION__=0x0900 -I"C:\Users\USE DIS\Anaconda3\envs\py27\lib\site-packages\numpy\core\include" -I"C:\Users\USE DIS\Anaconda3\envs\py27\include" -I"C:\Users\USE DIS\Anaconda3\envs\py27\PC" -c skimage\_shared\geometry.c -o build\temp.win-amd64-2.7\Release\skimage\_shared\geometry.o
Found executable C:\Users\USE DIS\Anaconda3\envs\py27\Scripts\gcc.bat
'C:\Users\USE' is not recognized as an internal or external command,
operable program or batch file.
error: Command "gcc -g -DDEBUG -DMS_WIN64 -O0 -Wall -Wstrict-prototypes -DNPY_MINGW_USE_CUSTOM_MSVCR -D__MSVCRT_VERSION__=0x0900 -I"C:\Users\USE DIS\Anaconda3\envs\py27\lib\site-packages\numpy\core\include" -I"C:\Users\USE DIS\Anaconda3\envs\py27\include" -I"C:\Users\USE DIS\Anaconda3\envs\py27\PC" -c skimage\_shared\geometry.c -o build\temp.win-amd64-2.7\Release\skimage\_shared\geometry.o" failed with exit status 1
----------------------------------------

Can't make automake use C++11

I'm working on a project and it seems that I can't make the automake script use C++11.
In the root directory of my project I have a Makefile.am which looks like this (it was auto-generated by Eclipse):
SUBDIRS=src
Then in /rootdir/src I have a Makefile.am that looks like this:
AM_CXXFLAGS=-Wall -fPIC -std=gnu++11 -DVERSION=\"$(VERSION)\" -DPROG="\"$(PACKAGE)\""
bin_PROGRAMS = algatorc
algatorc_SOURCES = algatorc.cpp
include_HEADERS = Timer.hpp TestSetIterator.hpp TestCase.hpp ETestSet.hpp EParameter.hpp Entity.hpp ParameterSet.hpp AbsAlgorithm.hpp Log.hpp JSON.hpp JSONValue.hpp
lib_LIBRARIES = libAlgatorc.a
libAlgatorc_a_SOURCES = ParameterSet.cpp TestCase.cpp EParameter.cpp ETestSet.cpp TestSetIterator.cpp Entity.cpp Timer.cpp JSON.cpp JSONValue.cpp AbsAlgorithm.cpp
algatorc_LDADD=libAlgatorc.a
So, I added -std=gnu++11 for C++11 support but I still get this error:
g++ -DPACKAGE_NAME=\"algatorc\" -DPACKAGE_TARNAME=\"algatorc\" -DPACKAGE_VERSION=\"1.0\" -DPACKAGE_STRING=\"algatorc\ 1.0\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" -DPACKAGE=\"algatorc\" -DVERSION=\"1.0\" -I. -g -O2 -MT algatorc.o -MD -MP -MF .deps/algatorc.Tpo -c -o algatorc.o algatorc.cpp
In file included from /usr/include/c++/4.8/thread:35:0,
from Log.hpp:281,
from algatorc.cpp:18:
/usr/include/c++/4.8/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support is currently experimental, and must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
#error This file requires compiler and library support for the \
^
From this error I can see that g++ doesn't use -Wall -fPIC -std=gnu++11, but I don't see why; it's using something else entirely.
This is my configure.ac script, which is located in the root directory of my project:
AC_PREREQ([2.69])
AC_INIT([algatorc], [0.1], [my_mail])
AC_CONFIG_SRCDIR([src/TestCase.hpp])
#AC_CONFIG_HEADERS([config.h])
LT_INIT
AM_INIT_AUTOMAKE
# Checks for programs.
AC_PROG_CXX
# Checks for libraries.
# Checks for header files.
AC_CHECK_HEADERS([stdlib.h string.h sys/time.h unistd.h wchar.h wctype.h])
# Checks for typedefs, structures, and compiler characteristics.
AC_CHECK_HEADER_STDBOOL
AC_C_INLINE
AC_TYPE_SIZE_T
# Checks for library functions.
AC_FUNC_MALLOC
AC_FUNC_MKTIME
AC_CHECK_FUNCS([gettimeofday memset mkdir])
LIBS=-ldl
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
I have also tried adding AX_CXX_COMPILE_STDCXX_11 to the configure.ac script, but the error still occurs. Any idea how to fix this?
I am using Ubuntu 12.04 x64, and Eclipse (Version: Mars Release (4.5.0))
A quick test shows that everything works correctly for me with a similar configuration, so you're going to have to figure out what's going on by rolling up your sleeves and looking into your final Makefile.
Look inside the automake-generated Makefile. Somewhere inside it you should find the final build rule for .cpp.o; search for ".cpp.o". It should look something like this:
.cpp.o:
        $(AM_V_CXX)$(CXXCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
        $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
After verifying this, the next step is to look at what your CXXCOMPILE macro is defined to. It should look something like this:
CXXCOMPILE = $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \
$(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS)
And that's your AM_CXXFLAGS variable being used. Finally, confirm how it's defined in the actual Makefile.
In my case, for my simple test, it was simply:
AM_CXXFLAGS = -std=gnu++11
In my case it was just that; in your case, obviously, you'll have your other flags in there as well.
That's it. The automake-generated Makefile is obviously quite large and looks intimidating, but when you get down to it, it's not very complicated at all.
It's going to be one of two things: either another part of your Makefile.am clobbers the value of AM_CXXFLAGS, or the CXXCOMPILE macro is clobbered. One thing about automake is that it generally doesn't complain if a macro or a variable is redefined; it simply generates the final Makefile using the final value of the variable. So I would guess that somewhere later in your Makefile.am you set AM_CXXFLAGS to something else, without being aware of it.
Note: the actual macros often get tweaked, with each successive version of automake, so yours may look slightly different, but the general idea should be the same. The .cpp.o build rule runs the CXXCOMPILE macro, which uses AM_CXXFLAGS.
First of all, automake will not regenerate the Makefile until you run automake again.
So, I added -std=gnu++11 for C++11 support but I still get this error:
g++ -DPACKAGE_NAME=\"algatorc\" -DPACKAGE_TARNAME=\"algatorc\" -DPACKAGE_VERSION=\"1.0\" -DPACKAGE_STRING=\"algatorc\ 1.0\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" -DPACKAGE=\"algatorc\" -DVERSION=\"1.0\" -I. -g -O2 -MT algatorc.o -MD -MP -MF .deps/algatorc.Tpo -c -o algatorc.o algatorc.cpp
In file included from /usr/include/c++/4.8/thread:35:0,
from Log.hpp:281,
from algatorc.cpp:18:
/usr/include/c++/4.8/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support is currently experimental, and must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
#error This file requires compiler and library support for the \
^
Your log shows there is no -std=gnu++11 in gcc's compile line. In other words, automake didn't generate a Makefile containing -std=gnu++11.
So, all you need to do is run the following from your project root on the command line:
$ autoreconf
This will automatically execute aclocal, autoconf, automake, and so on.
For checking the -std=c++11 flag, it is good to use ax_cxx_compile_stdcxx_11.m4.
To use it, you need to download it from the above link and place it in $(project_top)/m4/.
Next, write the following in configure.ac:
AX_CXX_COMPILE_STDCXX_11([noext], [mandatory])
Then execute the commands below, and configure will check whether C++11 features can be used on the platform:
$ autoreconf -vi -I m4
$ ./configure

Compiling the cal3d demo "cally" (3d model library with boned animations)

I think this is a question about automake.
http://home.gna.org/cal3d/
I'm struggling with the cally demo of Cal3D.
The first problem I ran into was that the Cal3D code base is missing #include <cstring> and #include <memory> in a lot of places.
Adding these every time I got an error in a Cal3D source file was enough to let me compile it.
The cally demo also needed some #include <cstring> additions.
Now my problem is that HAVE_SDL_H is not defined when tick.cpp is compiled.
The configure script and Makefile seem to accept that SDL is installed on my system, but the macro checks in src/tick.cpp don't.
I guess there is some kind of bug in configure.in or something, but I can't figure out just what it is.
if g++ -DHAVE_CONFIG_H -I. -I. -I.. -O3 -ffast-math -funroll-all-loops -g -O2 -I/usr/include -I/usr/local/include -I/usr/include/SDL -D_GNU_SOURCE=1 -D_REENTRANT -MT tick.o -MD -MP -MF ".deps/tick.Tpo" -c -o tick.o tick.cpp; \
then mv -f ".deps/tick.Tpo" ".deps/tick.Po"; else rm -f ".deps/tick.Tpo"; exit 1; fi
tick.cpp:144:5: error: #error "no timer implemented for your plateform"
Edit:
I've finally compiled the demo.
When I compiled cal3d I added #include <cstring> to the following files:
src/cal3d/hardwaremodel.cpp
src/cal3d/platform.cpp
src/cal3d/renderer.cpp
src/cal3d/submesh.cpp
src/cal3d_converter.cpp
When I compiled cally I added #include <cstring> to the following files:
src/demo.cpp
src/model.cpp
In model.cpp I changed line 640 from
glBindTexture(GL_TEXTURE_2D, (GLuint)pCalRenderer->getMapUserData(0));
to
glBindTexture(GL_TEXTURE_2D, *(GLuint*)pCalRenderer->getMapUserData(0));
I also did some uglier changes to get src/tick.cpp to compile.
In src/tick.cpp I removed everything that had anything to do with SDL. I also removed a macro if clause checking for __i386__ or __ia64__, so that Tick::getTime() could also be compiled.
I know that this is not a proper fix, so improvements are very much welcome.
64-bit OpenSuSE with a 2.6.27 kernel.
GCC: 4.3.2
GNU Automake: 1.10.1
GNU Autoconf 2.63
64-bit versions of the SDL library are installed with zypper (through the GUI).
Solution
In configure.in change
AC_CHECK_HEADERS([SDL.h])
to
AC_CHECK_HEADERS([SDL/SDL.h])
(then run autoreconf and ./configure)
In tick.cpp, change all checks for HAVE_SDL_H to HAVE_SDL_SDL_H.
This is all due to a restructuring in the SDL library.
The errors you got with missing #include <cstring> and #include <memory> are mainly due to a cleanup that happened in the GNU headers: inclusions of unnecessary headers were removed, and consequently programs that do not include the proper headers for the features they use now face compile errors.
About HAVE_SDL_H: most likely, your Linux distribution is missing packages.
You probably need to install the SDL library. Some Linux distributions, like Ubuntu, split the packages between the library runtime and the dev files, so you need to install both packages:
sudo apt-get install libsdl1.2-dev
Regarding:
if g++ -DHAVE_CONFIG_H -I. -I. -I.. -O3 -ffast-math -funroll-all-loops -g -O2 -I/usr/include -I/usr/local/include -I/usr/include/SDL -D_GNU_SOURCE=1 -D_REENTRANT -MT tick.o -MD -MP -MF ".deps/tick.Tpo" -c -o tick.o tick.cpp; \
then mv -f ".deps/tick.Tpo" ".deps/tick.Po"; else rm -f ".deps/tick.Tpo"; exit 1; fi
tick.cpp:144:5: error: #error "no timer implemented for your plateform"
It indeed fails to compile because HAVE_SDL_H is not defined in config.h. When you look at configure.in you see it's using AC_CHECK_HEADERS([SDL.h])
From the autoconf manual:
— Macro: AC_CHECK_HEADERS (header-file..., [action-if-found], [action-if-not-found], [includes])
For each given system header file header-file in the blank-separated argument list that exists, define HAVE_header-file (in all capitals). If action-if-found is given, it is additional shell code to execute when one of the header files is found. You can give it a value of ‘break’ to break out of the loop on the first match. If action-if-not-found is given, it is executed when one of the header files is not found.
So, AC_CHECK_HEADERS([SDL.h]) makes configure search for SDL.h in /usr/include and doesn't find it because its (new?) path is /usr/include/SDL/SDL.h
As a workaround, use CPPFLAGS to add system header search paths when invoking configure:
./configure CPPFLAGS="-I/usr/include/SDL"
Now you may want to fix configure.in.
configure.in uses AM_PATH_SDL(1.2.0), which will end up invoking sdl-config --cflags to define SDL_CFLAGS (the implementation of AM_PATH_SDL typically lives in /usr/share/aclocal/sdl.m4):
# Check for SDL
AM_PATH_SDL(1.2.0)
LDFLAGS="$LDFLAGS $SDL_LIBS"
CXXFLAGS="$CXXFLAGS $SDL_CFLAGS"
AC_CHECK_HEADERS([SDL.h])
AC_LANG_CPLUSPLUS
sdl-config --cflags returns -I/usr/include/SDL -D_GNU_SOURCE=1 -D_REENTRANT but those -I and -D directives should not end up in CFLAGS anyway but rather in CPPFLAGS (as per the autoconf manual). Hence I would say there is already something wrong "at the SDL level".
Now take a look at the AC_LANG documentation:
‘C’
Do compilation tests using CC and CPP and use extension .c for test programs. Use compilation flags: CPPFLAGS with CPP, and both CPPFLAGS and CFLAGS with CC.
‘C++’
Do compilation tests using CXX and CXXCPP and use extension .C for test programs. Use compilation flags: CPPFLAGS with CXXCPP, and both CPPFLAGS and CXXFLAGS with CXX.
Moving AC_LANG_CPLUSPLUS up so that it is at least above AC_CHECK_HEADERS([SDL.h]) should make configure use g++ and CXXFLAGS when trying to compile SDL.h, which should succeed since SDL_CFLAGS was added to CXXFLAGS. (Again, it should really be SDL_CPPFLAGS, but you won't change SDL...)