Here's some sample test code I'm trying to run on an embedded Linux system:
#include <iostream>

int main(int argc, char *argv[])
{
    char c = 'A';
    int i = 7;

    std::cout << "Hello World from C++" << std::endl;
    std::cout << "c=" << c << std::endl;
    std::cout << "i=" << i << std::endl;
}
The embedded system is a Microblaze, which is a 32-bit RISC softcore processor running on a Xilinx FPGA. Please don't be put off by that, as a lot of your standard Linux knowledge will still apply. The processor is configured as LSB with an MMU, and the Linux build I'm using (PetaLinux, supplied by Xilinx) is expecting the same. I'm using the supplied GNU compiler; Microblaze appears to be officially supported in GCC.
The problem I'm having is that when the stdlib needs to interact with the integer, it segfaults. Here's the output:
Hello World from C++
c=A
Segmentation fault
Note that the char is handled fine. The C equivalent of this code also works fine:
#include <stdio.h>

int main(int argc, char *argv[])
{
    char c = 'A';
    int i = 7;

    printf("Hello World from C\n");
    printf("c=%c\n", c);
    printf("i=%i\n", i);
    return 0;
}
...
Hello World from C
c=A
i=7
This leads me to suspect an issue with the shared library libstdc++.so.6.0.20. That library is supplied by Xilinx though, so it should be correct. The file output of that library is:
libstdc++.so.6.0.20: ELF 32-bit LSB shared object, Xilinx MicroBlaze 32-bit RISC, version 1 (SYSV), dynamically linked, not stripped
The file output of my binary is:
cpptest: ELF 32-bit LSB executable, Xilinx MicroBlaze 32-bit RISC, version 1 (SYSV), dynamically linked, interpreter /lib/ld.so.1, for GNU/Linux 2.6.32, stripped
I've also tried statically linking my binary using the -static flag, but the result was the same.
Here are the most relevant compiler and linker settings I'm using. I've tried changing these, to no avail.
CC=microblazeel-xilinx-linux-gnu-gcc
CXX=microblazeel-xilinx-linux-gnu-g++
CFLAGS= -O2 -fmessage-length=0 -fno-common -fno-builtin -Wall -feliminate-unused-debug-types
CPPFLAGS?=
CXXFLAGS= -O2 -fmessage-length=0 -fno-common -fno-builtin -Wall -feliminate-unused-debug-types
LDFLAGS=-Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed
To compile:
@$(CCACHE) $(CXX) $(CPPFLAGS) $(CXXFLAGS) $(CFLAGS) -c $< -o "$@"
To link:
@$(CXX) $(RELOBJECTS) $(LDFLAGS) $(EXT_LIBS) -o $(RELBINARY)
Note that microblazeel refers to the little-endian version of the Microblaze compiler.
I would very much like to debug this or at least look at a coredump, but no coredump seems to be produced when the segfault happens, and there's no gdb executable in the Microblaze Linux build. Maybe I'm missing something?
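One thing I still need to rule out: core dumps are usually disabled by the shell's default resource limits, and the kernel's core pattern decides where a dump lands. Assuming the PetaLinux image has a reasonably standard shell and /proc (which I haven't verified), something like this should tell me more:

ulimit -c unlimited                  # core dumps are often disabled by default
cat /proc/sys/kernel/core_pattern    # a bare name means the current directory
./cpptest                            # re-run; a core file should now appear on the segfault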
Thanks for taking your time to read this. Any thoughts?
From what I could see after a bit of research, Vivado is the HW development IDE [they offer a trial period for it, since it's the HW development tool they always want to charge for].
If you're using the standard SDK board from Xilinx, everything should be preconfigured. Otherwise, a HW designer produces a HW design that has Microblaze in it.
From that, you may have to use petalinux to generate a new boot, kernel, etc. image that is compatible.
You may need to rebuild libstdc++ from source, but I'd do that as a last resort. That is, don't bother with it until you've got gdb working and have test results.
Here are some petalinux PDF files:
http://www.xilinx.com/support/documentation/sw_manuals/petalinux2013_10/ug977-petalinux-getting-started.pdf
http://www.xilinx.com/support/documentation/sw_manuals/petalinux2013_10/ug981-petalinux-application-development-debug.pdf
The development guide shows how to invoke gdb (e.g.):
On the target system:
gdbserver host:1534 /bin/myapp
On the development system:
petalinux-utils --gdb myapp followed by target remote 192.168.0.10:1534
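Spelled out, the session looks roughly like this (the IP address and port are from the example above; your target's address will differ):

# on the target
gdbserver host:1534 /bin/myapp
# on the development host
petalinux-utils --gdb myapp
(gdb) target remote 192.168.0.10:1534
(gdb) continue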
I've done some editing on your Makefile, with annotations. I've commented out some of the non-essential options. Note that I'm using the += operator to build up CFLAGS/CXXFLAGS gradually.
The basic idea here is to do a build with minimum deviations from "standard". Add only proven essential options. Build and test. Add back the options one-by-one [rebuild and test each time] until you find the option that is causing the problem.
I do, however, have a strong suspicion about -fno-common being a source of problems. Also, to a lesser extent, I'm a bit suspicious of -Wl,--as-needed.
Should these options work? Sure, but xilinx/microblaze ain't no x86 ...
I've added two command line make variables:
DEBUG -- generate for debug with gdb
VERBOSE -- show everything about the build process
For example, try make <whatever> DEBUG=1 VERBOSE=1
CC = microblazeel-xilinx-linux-gnu-gcc
CXX = microblazeel-xilinx-linux-gnu-g++
CPPFLAGS ?=
CMFLAGS += -Wall -Werror
CMFLAGS += -fmessage-length=0
# compile for gdb session
# NOTES:
# (1) -gdwarf-2 may or may not be the right option for microblaze
# (2) based on doc for -feliminate-unused-debug* petalinux/microblaze may want
# stabs format
ifdef DEBUG
CMFLAGS += -gdwarf-2
CMFLAGS += -O0
# compile for normal build
else
CMFLAGS += -O2
CMFLAGS += -feliminate-unused-debug-types
endif
# NOTE: I used to use "@" on commands, but now I leave it off -- debug or not
# sure it's "ugly" but you can get used to it pretty quickly--YMMV
ifndef VERBOSE
###Q := @
Q :=
else
Q :=
endif
# let compiler/linker tell you _everything_:
# (1) configure options when tool was built
# (2) library search paths
# (3) linker scripts being used
ifdef VERBOSE
CMFLAGS += -v
LDFLAGS += -Wl,--verbose=2
endif
CMFLAGS += -fno-builtin
# NOTE: I'd _really_ leave this off, as it may conflict with the c++ std lib:
# "<<" calls _M_insert (which is in the library, and the library is almost
# certainly _not_ built with -fno-common)
###CMFLAGS += -fno-common
# NOTE: I'm also suspicious of this a little bit because the c++ lib may have
# some "weak" symbols that the c library doesn't
###LDFLAGS += -Wl,--as-needed
# NOTE: this seems harmless enough, but you can comment it out to see if it
# helps
LDFLAGS += -Wl,--hash-style=gnu
# NOTE: an optimization only
ifndef DEBUG
LDFLAGS += -Wl,-O1
endif
CFLAGS += $(CMFLAGS)
CXXFLAGS += $(CMFLAGS)
# NOTES:
# (1) leave this off for now -- doesn't save _that_ much and adds complexity
# to the build
# (2) IMO, I _never_ use it and I erase/uninstall it on any system I
# administrate (or just ensure the build doesn't use it by removing it
# from $PATH)--YMMV
###XCCACHE = $(CCACHE)
# to compile
$(Q)$(XCCACHE) $(CXX) $(CPPFLAGS) $(CXXFLAGS) $(CFLAGS) -c $< -o "$@"
# to link
$(Q)$(CXX) $(RELOBJECTS) $(LDFLAGS) $(EXT_LIBS) -o $(RELBINARY)
Related
I would like to compile and run my program in two different environments. The libraries in the two environments are installed in slightly different places, resulting in different makefile lines:
In makefile A:
CXXFLAGS=-I$(DIR) -flto -fopenmp -O3 -g -march=native -std=gnu++17 -c -I/opt/interp2d/include -std=c++17 -I/opt/splinter/include -I/usr/include/eigen3
In makefile B:
CXXFLAGS=-I$(DIR) -nostdinc++ -I~/local_opt/eigen/include/eigen3/ -I~/local_opt/boost/include -I~/local_opt/armadillo/include -flto -fopenmp -O3 -g -march=native -std=gnu++17 -c -I~/local_opt/interp2d/include -std=c++17 -I~/local_opt/splinterp/include -I/usr/include/eigen3
My problem now is that I am developing the program on the first machine, using makefile A, but also deploying it on the second machine. The deployment is done using git.
Every time I do a git pull on the second machine, I have to fix all the paths in the makefile to compile the program properly. Nevertheless, I would still like to keep the makefile in the git repository, so that both makefiles stay at the same level regarding compile flags and linked libraries.
Thus, is there an easier way to still sync the makefile via git, while using different paths for the libraries and includes?
I think you could solve your problem by conditionally setting the variable CXXFLAGS in a common file (e.g.: config.mk) and by including that file in your makefiles.
The value used for setting the CXXFLAGS variable could, for example, depend on the value of the environment variable HOST:
ifeq ($(HOST),A)
CXXFLAGS = ... # for machine A
else # B
CXXFLAGS = ... # for machine B
endif
Then, include this config.mk makefile in both makefileA and makefileB:
include config.mk
I like this answer; however, I thought I'd mention this for completeness: if you have a lot of different hosts, you can do something to the effect of:
include HostConfig_$(HOST).mk
And then create HostConfig_A.mk and HostConfig_B.mk, which set host-specific flags (be it directories, etc.). This is useful if you are managing a large project with lots of different host-specific variables; see the sketch below.
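For instance, a hypothetical HostConfig_A.mk might contain nothing but the machine-specific paths (the variable name and paths here are illustrative, not taken from the question):

# HostConfig_A.mk -- paths specific to machine A (illustrative)
CXX_INCLUDES = -I/opt/interp2d/include -I/opt/splinter/include -I/usr/include/eigen3

# in the shared Makefile
include HostConfig_$(HOST).mk
CXXFLAGS += $(CXX_INCLUDES)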
As well, (for smaller projects), you could do something to the effect of:
CXX_INCLUDES_A = ...
CXX_INCLUDES_B = ...
CXX_FLAGS := -I$(DIR) -flto -fopenmp -O3 -g -march=native -std=gnu++17
CXX_FLAGS += $(CXX_INCLUDES_$(HOST))
The traditional answer to this problem is a configure script (see automake and autoconf for a widely used framework). After checking out the source you run ./configure --with-eigen=~/local_opt/eigen/include/eigen3/ and it adjusts your Makefiles accordingly (usually Makefile is generated from Makefile.in, and only Makefile.in is in git).
Note: properly done, you only need to run configure on the first checkout, not on updates; make can regenerate the Makefile automatically as needed.
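As a rough sketch of the mechanism (variable names illustrative): Makefile.in is the template kept in git, and configure substitutes the @...@ placeholders in it to produce the real Makefile:

# Makefile.in -- kept in git; ./configure generates Makefile from it
EIGEN_INCLUDE = @EIGEN_INCLUDE@
CXXFLAGS = -I$(EIGEN_INCLUDE) -flto -fopenmp -O3 -g -march=native -std=gnu++17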
I am learning Linux, and my first step is to adapt my project to run on Linux. Here is a simple makefile (mostly for educational purposes) which generates an out file:
#------------------------BUILD VARIABLES-----------------------------
# Directories, containing headers
INCLUDE_DIR = ../Include/
# Output directory which will contain output compiled file
OUTPUT_DIR = ../Bin/Debug/
SOURCES = EngineManager.cpp Geometry.cpp Main.cpp Model.cpp \
Shaders.cpp TGAImage.cpp
HEADERS = EngineManager.h Geometry.h Line.h Model.h Shaders.h \
TGAImage.h Triangle.h
#------------------------BUILD_RULES---------------------------------
TinyRenderBuilding : $(addprefix $(INCLUDE_DIR), $(HEADERS)) $(SOURCES)
	mkdir -p $(OUTPUT_DIR)
	g++ -std=c++14 -o $(OUTPUT_DIR)TinyRender.out -g -I$(INCLUDE_DIR) $(SOURCES)
I cannot understand why g++ does not generate debug symbols; the -g option is present.
To include debug symbols when compiling with g++ you need to pass the -g option.
In a makefile this usually means adding it to CXXFLAGS.
Also make sure you pass the -g option when you create the executable: when you compile, you turn .cpp files into .o files; when you link, you turn those .o files into your executable.
If you change the options before running make again, be sure to run make clean first, because otherwise nothing will be recompiled.
Finally, make sure that you do not have additional steps, like a strip command run on the executable, which would remove debugging symbols.
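A minimal sketch of how that usually looks in a makefile (the target and file names are illustrative):

CXXFLAGS += -g

# -g must reach both steps: compiling (.cpp -> .o) and linking
TinyRender.out: main.o model.o
	$(CXX) $(CXXFLAGS) -o $@ $^

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@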
You can use
objdump --syms <executable-file>
to check whether an executable has symbols.
When it doesn't have symbols, it will say something like:
SYMBOL TABLE:
no symbols
(I'm no expert in C/C++ programming, I just ran into this while trying to debug someone else's code.)
According to your makefile, g++ should produce debug symbols (the -g option is present). To confirm this, you can run file on the resulting binary:
$ file a.out
a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=9fe588c18099ef418daf288931bb033cc287922e, with debug_info, not stripped
(Note the "with debug_info" string in the output.)
I'm not entirely sure, but you can try -g or -ggdb. You can do some research on these; we were using these parameters to debug C programs with the gdb tool.
The problem:
Two versions of g++ are installed on a computer running Ubuntu 12.04: 4.6 and 5.2.
I have to compile a C++11 program using a Makefile. If I use g++ as the compiler, version 4.6 is called automatically; it does not support C++11, so the compilation fails. I've followed a tutorial online, so that calling g++ now automatically invokes version 5.2, and it works.
I find this solution not so good, since it works only on my PC. Is there a way to detect in the Makefile whether the default g++ version supports C++11 and, if not, switch to a more recent version?
Thank you!
Is there a way to detect in the Makefile whether the default g++ version supports C++11 and, if not, switch to a more recent version?
You can certainly detect the version of the default compiler available in PATH in your makefile. However, where do you search for another version?
The standard approach is to let the user specify the C compiler through the CC make variable and the C++ compiler through CXX, e.g.: make CC=/path/to/my/gcc CXX=/path/to/my/g++.
You can always select which gcc to use while invoking make
make CXX=/gcc/path/of/your/choice
Otherwise you can detect the gcc version using
ifdef CXX
GCC_VERSION = $(shell $(CXX) -dumpversion)
else
GCC_VERSION = $(shell g++ -dumpversion)
endif
in the Makefile. Then you can test whether your gcc is >= 4.6:
ifeq ($(shell expr $(GCC_VERSION) '>=' 4.6), 1)
    # version is new enough -- e.g., enable C++11 here (illustrative)
    CXXFLAGS += -std=c++11
endif
UPDATE: newer gcc needs -dumpfullversion together with -dumpversion to print the full version (icx is the CC from Intel oneAPI):
$ icx -dumpversion
14.0.0
$ gcc -dumpversion
9
$ icx -dumpfullversion -dumpversion
14.0.0
$ gcc -dumpfullversion -dumpversion
9.3.1
One very simple way is to use conditional statements in your makefile, going for versions which you know are compatible and using the default gcc only as a fallback. Here's a basic example:
CXX=g++
ifeq (/usr/bin/g++-4.9,$(wildcard /usr/bin/g++-4.9*))
CXX=g++-4.9
# else if... (a list of known-to-be-ok versions)
endif
The other, more robust method is to generate your makefile using a script that checks for capabilities using test compilations, kind of like what ./configure usually does. I really don't mean to recommend autotools, though.
The thing to do is to build your Makefile to use as many implicit rules as possible. By default, compilation uses a set of standard variables.
The variable $(CXX) is the C++ compiler command and defaults to g++ on Linux systems, so changing CXX to a different compiler executable will change the compiler for all implicit compile commands.
When you write explicit rules use the same variable that the implicit rules use. So instead of this:
program: program.cpp
	g++ -o program program.cpp
Do this:
program: program.cpp
	$(CXX) -o program program.cpp
Other variables you should use are:
CPPFLAGS = -Iinclude
CXXFLAGS = -std=c++14 -g3 -O0
LDLIBS = -lm        # for example
Those are the pre-processing flags (CPPFLAGS), compiler flags (CXXFLAGS), and library linking flags (LDLIBS).
Using these standard variables gives the person compiling the project the freedom to control the compilation for their environment.
See the GNU make manual.
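Putting those pieces together, a minimal sketch that leans on the built-in rules (file names illustrative): make's built-in %.o rule already compiles with $(CXX) $(CPPFLAGS) $(CXXFLAGS), so only the link step needs a recipe:

CPPFLAGS = -Iinclude
CXXFLAGS = -std=c++14 -g3 -O0
LDLIBS = -lm        # for example

program: program.o helper.o
	$(CXX) $(LDFLAGS) -o $@ $^ $(LDLIBS)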
This works for me:
cmake_minimum_required(VERSION 2.8.11)
project(test)
if (${CMAKE_CXX_COMPILER_VERSION} VERSION_LESS 5.0)
    message(FATAL_ERROR "You need a version of gcc > 5.0")
endif (${CMAKE_CXX_COMPILER_VERSION} VERSION_LESS 5.0)
add_executable(test test.cpp)
You can check in your source code the gcc version and abort compilation if you don't like it. Here is how it works:
/* Test for GCC > 4.6 */
#if !(__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 6))
#error gcc above version 4.6 required!
#endif
I cannot find an option for the ARM GNU toolchain to compile multiple C files at the same time. I use make -j5 all the time when compiling with gcc; it speeds up compile time dramatically. It would be nice if ARM GNU had a similar option.
Here is my setup:
--Fedora 20
--Core i5
--Eclipse with ARM GNU plugin
--ARM GNU 4.8-2014-q1-update (from here: https://launchpad.net/gcc-arm-embedded)
--Target uP: STM32F205RB
I've tried to get CodeSourcery GCC working, unsuccessfully. ARM GNU seemed to work well after a little setup. CodeSourcery GCC should have a -j option, as we cross-compile all the time for embedded Linux.
GCC is not multi-threaded. The -j<n> switch is specific to the make build system, not the compiler. It tells make how many tasks it can run in parallel.
If you run make -j4 you can observe in your task manager/top/process list that it tries to run 4 instances of GCC compiling 4 independent *.c files at the same time.
To make use of the -j option you must have a Makefile in your project that can benefit from it. It should have multiple independent targets, so that they can be launched in parallel.
If you are lost in the terminology, I advise you to look at a make tutorial, such as this one:
http://mrbook.org/tutorials/make/
The usual strategy here is to have a separate target for every .c or .cpp file in the project. That way make can easily spawn multiple compiler processes for each compilation unit. Once all *.o files are generated, they are linked.
Let's look at this example snippet:
SRCS := main.c func.c other.c another_file.c ...
OBJS := $(SRCS:.c=.o)
objects: $(OBJS)
%.o: %.c
	gcc -o $(@) -c $(<)
We pass a list of .c files, map them to the corresponding .o files using suffix substitution, and treat the list of *.o files as targets. Now make can compile each .c file in parallel.
In contrast, if we do something like this:
SRCS := main.c func.c other.c another_file.c ...
all:
	gcc $(SRCS) -o a.out
...we won't benefit from the -j switch at all, because there is only one target.
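With the first layout, the degree of parallelism is then chosen purely at invocation time, e.g. (nproc is a GNU coreutils command, assuming it is available on your system):

make -j$(nproc) objects     # one compile job per CPU core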
Summary
I have looked over the code the SpiderMonkey 'shell' application uses to create the ctypes JavaScript object, but I'm a less-than-novice C programmer. Due to the varying levels of insanity emitted by modern build systems, I can't seem to track down the code or command that actually links a program with the desired functionality.
method.madness
This js-ctypes implementation by the Mozilla devs is an awesome addition. Since its conception, scripting has primarily been used to exert control over more rigorous and robust applications. The advent of js-ctypes in the SpiderMonkey project enables JavaScript to stand up and be counted as a full-fledged object-oriented rapid application development language, flying high above 'the bar' set by various venerable application development languages such as Microsoft's VB6.
Shall we begin?
I built SpiderMonkey with this config: ./configure --enable-ctypes --with-system-nspr
followed by successful execution of: make && make install
The js shell works fine and a global ctypes javascript object was verified operational in that shell.
Working with code taken from the first source listing at How to embed the JavaScript Engine -MDN, I made an attempt to instantiate the JavaScript ctypes object by inserting the following code at line 66:
/* Populate the global object with the ctypes object. */
if (!JS_InitCTypesClass(cx, global))
return NULL;
I compiled with: g++ $(./js-config --cflags --libs) hello.cpp -o hello
It compiles with a few warnings:
hello.cpp: In function ‘int main(int, const char**)’:
hello.cpp:69:16: warning: converting to non-pointer type ‘int’ from NULL [-Wconversion-null]
hello.cpp:80:20: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
hello.cpp:89:17: warning: NULL used in arithmetic [-Wpointer-arith]
But when you run the application:
./hello: symbol lookup error: ./hello: undefined symbol: JS_InitCTypesClass
Moreover
JS_InitCTypesClass is declared extern in 'dist/include/jsapi.h', but the function resides in 'ctypes/CTypes.cpp', which includes its own header 'CTypes.h' and is compiled at some point by some command during 'make' to yield './CTypes.o'. As I stated earlier, I am less than a novice with C code, and I really have no idea what to do here.
Please give, or point me toward, a generic example of making the js-ctypes object functional in an embedding.
The Hack
It had already occurred to me that linkage was failing because of conditional defines in the header files, as well as scattered lib and header locations. Fair enough: I tried defining JS_HAS_CTYPES on the command line, but if it had any effect, it certainly was not enough.
Since the SpiderMonkey shell has its own unique makefile and already has working access to the functionality I am trying to capture, I decided to rename js.cpp to js.cpp.tmp and let my code stand in its place. That almost worked.
The file compiled well and no runtime linking errors were thrown on execution, but the code (the 'JSNativeObject' ctypes) failed JS_InitCTypesClass almost completely. Seeing that my linking error had long been forgotten, I immediately went looking through the output of make to see if I could 'swipe' the compilation commands and... we have a BINGO!
The Compilation
After restoring shell/js.cpp to its original state, I moved hello.cpp to the root source directory of SpiderMonkey and began to correct the relative paths created by the makefile, as well as removing constructs that obviously bore no relevance to my application.
While the following commands appear to render an operational binary, the author can give no assurance as to the correctness or completeness of this listing.
c++ -o hello.o -c -Idist/system_wrappers_js -include config/gcc_hidden.h \
-DEXPORT_JS_API -DOSTYPE=\"Linux3.2\" -DOSARCH=Linux -I. -Idist/include \
-Idist/include/nsprpub -I/usr/include/nspr -fPIC -fno-rtti \
-fno-exceptions -Wall -Wpointer-arith -Woverloaded-virtual -Wsynth \
-Wno-ctor-dtor-privacy -Wno-non-virtual-dtor -Wcast-align \
-Wno-invalid-offsetof -Wno-variadic-macros -Werror=return-type -pedantic \
-Wno-long-long -fno-strict-aliasing -pthread -pipe -DNDEBUG -DTRIMMED -Os \
-freorder-blocks -fomit-frame-pointer -DJS_HAS_CTYPES -DMOZILLA_CLIENT \
-include js-confdefs.h -MD -MF .deps/hello.pp hello.cpp;
c++ -o hello -fno-rtti -fno-exceptions -Wall -Wpointer-arith \
-Woverloaded-virtual -Wsynth -Wno-ctor-dtor-privacy \
-Wno-non-virtual-dtor -Wcast-align -Wno-invalid-offsetof \
-Wno-variadic-macros -Werror=return-type -pedantic \
-Wno-long-long -fno-strict-aliasing -pthread -pipe -DNDEBUG \
-DTRIMMED -Os -freorder-blocks -fomit-frame-pointer hello.o \
-lpthread -Wl,-rpath-link,/bin -Wl,-rpath-link,/usr/local/lib \
-Ldist/bin -Ldist/lib -L/usr/lib -lplds4 -lplc4 -lnspr4 \
-lpthread -ldl editline/libeditline.a libjs_static.a -ldl;
The two commands listed above were placed into an executable shell script named 'mkhello' which was saved to the root source directory.
From what I can gather, it is a two-stage compilation method: the first command compiles hello.cpp into the object file hello.o, and the second links that object file, together with the static libraries, into the final executable. Why it is arranged this way I am unsure, but the explanation seems very educational. Thoughts?
The Code: hello.cpp
/*
* This define is for Windows only, it is a work-around for bug 661663.
*/
#ifdef _MSC_VER
# define XP_WIN
#endif
/* Include the JSAPI header file to get access to SpiderMonkey. */
#include "jsapi.h"
/* The class of the global object. */
static JSClass global_class = {
"global", JSCLASS_GLOBAL_FLAGS,
JS_PropertyStub, JS_PropertyStub, JS_PropertyStub, JS_StrictPropertyStub,
JS_EnumerateStub, JS_ResolveStub, JS_ConvertStub, JS_FinalizeStub,
JSCLASS_NO_OPTIONAL_MEMBERS
};
/* The error reporter callback. */
void reportError(JSContext *cx, const char *message, JSErrorReport *report)
{
fprintf(stderr, "%s:%u:%s\n",
report->filename ? report->filename : "[no filename]",
(unsigned int) report->lineno,
message);
}
int main(int argc, const char *argv[])
{
/* JSAPI variables. */
JSRuntime *rt;
JSContext *cx;
JSObject *global;
/* Create a JS runtime. You always need at least one runtime per process. */
rt = JS_NewRuntime(8 * 1024 * 1024);
if (rt == NULL)
return 1;
/*
* Create a context. You always need a context per thread.
* Note that this program is not multi-threaded.
*/
cx = JS_NewContext(rt, 8192);
if (cx == NULL)
return 1;
JS_SetOptions(cx, JSOPTION_VAROBJFIX | JSOPTION_JIT | JSOPTION_METHODJIT);
JS_SetVersion(cx, JSVERSION_LATEST);
JS_SetErrorReporter(cx, reportError);
/*
* Create the global object in a new compartment.
* You always need a global object per context.
*/
global = JS_NewCompartmentAndGlobalObject(cx, &global_class, NULL);
if (global == NULL)
return 1;
/*
* Populate the global object with the standard JavaScript
* function and object classes, such as Object, Array, Date.
*/
if (!JS_InitStandardClasses(cx, global))
return 1;
/* Populate the global object with the ctypes object. */
if (!JS_InitCTypesClass(cx, global))
return 1;
/*
 * Your application code here. This may include JSAPI calls
* to create your own custom JavaScript objects and to run scripts.
*
* The following example code creates a literal JavaScript script,
* evaluates it, and prints the result to stdout.
*
* Errors are conventionally saved in a JSBool variable named ok.
*/
const char *script = "ctypes.open";
jsval rval;
JSString *str;
JSBool ok;
const char *filename = "noname";
uintN lineno = 0;
ok = JS_EvaluateScript(cx, global, script, strlen(script),
filename, lineno, &rval);
if (!ok)
return 1;
str = JS_ValueToString(cx, rval);
printf("%s\n", JS_EncodeString(cx, str));
/* End of your application code */
/* Clean things up and shut down SpiderMonkey. */
JS_DestroyContext(cx);
JS_DestroyRuntime(rt);
JS_ShutDown();
return 0;
}
Conclusion
$ ./mkhello
# ...
# error free garbage scrolls....
$ ./hello
function open() {
[native code]
}
Follow this example to provide js-ctypes to a SpiderMonkey embedding. You may or may not need to re-create these steps in this exact order, but from my current perspective it is highly recommended.
The problem: most platform distributors are ignorant of the recommendations given by code developers. Therefore js-ctypes is not enabled for mozjs185 on most, if not all, systems hosting the library. This leaves some issues for you to sort out.
After you have configured with --enable-ctypes and --with-system-nspr, followed by make and then make install (you might need to be root for the last command):
You will likely have two versions of libmozjs185.so on your system. One with ctypes enabled (located in /usr/local/lib) and one without ctypes enabled (located in /usr/lib).
You, as per the question, want to link against the library with ctypes. So that's what you do, by specifying -L/usr/local/lib -lnspr4 -lmozjs185 to your compiler. It will compile fine. But when the application runs, the OS library loader will load the first instance of the library that it finds. Unfortunately, this will likely be the copy in /usr/lib, which likely does not have ctypes enabled. That's where you'll run into this [solved] problem: g++ Undefined Symbol Error using shared library
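While sorting that out, a hedged workaround for testing (not a deployment fix) is to steer the loader explicitly, either at run time via LD_LIBRARY_PATH or at link time by baking in an rpath:

# run time: prefer the ctypes-enabled copy for this invocation only
LD_LIBRARY_PATH=/usr/local/lib ./hello

# link time: embed the search path in the binary itself
g++ $(./js-config --cflags) hello.cpp -o hello -L/usr/local/lib -lnspr4 -lmozjs185 -Wl,-rpath,/usr/local/lib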
The bottom line is this: multiple versions of the same library create one hell of an issue. The best way to provide js-ctypes to a SpiderMonkey embedding is therefore to link the ctypes-enabled static version of mozjs185 into your program, unless you want to write a library for dealing with multiple platform library loaders, or create your own (rebranded) version of mozjs185, which I have covered in great detail over at SpiderMonkey Build Documentation - MDN
To perform the static linking you will need to use these parameters with g++: -lnspr4 -lpthread -lmozjs185-1.0, provided that you have built and installed the development package correctly. This is the 'best' (platform-independent) way to provide js-ctypes to a SpiderMonkey embedding, although it does increase the size of your application by at least 3.5 MB; if you have built the debug version, it could be more than 15 times larger.
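As a sketch of what that link line might look like under those assumptions (the paths, and whether js-config is on your PATH, depend on how your development package was installed):

g++ $(./js-config --cflags) hello.cpp -o hello -lmozjs185-1.0 -lnspr4 -lpthread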