All,
I'm working on a new C++ project for an embedded system. Part of the system is some legacy Python code that we'll need to interface with. I've already prototyped a C++-to-Python interface using the various PyImport_ImportModule functions etc. provided by Python, and tested it on my host system (Ubuntu 17.04, 64-bit).
However, the build system in the new project also builds all dependencies, so it builds Python 2.7.13 from source. The problem is that the interface code that worked with the host-system Python does not work with the newly built-from-source Python. The error I am seeing is "time.so: undefined symbol: PyExc_ValueError", and the .py file I'm trying to call from C++ does import time in one of its first few lines. I checked that time.so is present in the custom-built Python, and I updated LD_LIBRARY_PATH to include it, but that didn't help. At the end of the Python build I do see these warnings, so perhaps one of them is relevant:
Python build finished, but the necessary bits to build these modules were not found:
_bsddb _sqlite3 _ssl
_tkinter bsddb185 bz2
dbm dl gdbm
imageop readline sunaudiodev
zlib
Can anyone suggest what to try next? We are not enabling any special options or using any non-standard flags in the Python we're building from source (perhaps some extra settings are required?).
This usually happens because either:
a clean build is required, or
the wrong libpython is being linked. I would suggest starting with a clean build and then double-checking your linker flags (make sure you build against Python 2.7 and link to Python 2.7, not to, say, Python 3.x).
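For a sanity check, below is a minimal sketch of the embedding pattern (the module name and build command are illustrative assumptions, not taken from your setup). One Linux-specific detail worth checking: the output of python2.7-config --ldflags includes -Xlinker -export-dynamic, and that flag matters when libpython is linked statically, because dynamically loaded extension modules such as time.so resolve symbols like PyExc_ValueError from the embedding executable.

// embed_test.cpp -- minimal embedding sketch (the module name is hypothetical).
// Suggested build, assuming the python2.7-config from your source build is on PATH:
//   g++ embed_test.cpp $(python2.7-config --cflags --ldflags)
#include <Python.h>

int main() {
    Py_Initialize();

    // If the import fails inside an extension module (e.g. time.so) with an
    // "undefined symbol" error, suspect the link step rather than this code.
    PyObject* module = PyImport_ImportModule("my_legacy_module");
    if (module == NULL) {
        PyErr_Print();
        Py_Finalize();
        return 1;
    }

    Py_DECREF(module);
    Py_Finalize();
    return 0;
}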
Also, please see this discussion; it looks like a very similar issue: https://www.panda3d.org/forums/viewtopic.php?t=13222
Edit: this also might be relevant: undefined symbol: PyExc_ImportError when embedding Python in C
Related
I'm trying to get Chrome's V8 embedded in my C++ project, but so far all I can get is what I would call my project embedded in V8. My main concern is that my program is cross-platform, and I would like the build commands to be the same everywhere. I started developing it on Windows, but I'm using a Mac now to get V8 running.
I can get V8 built and their samples running using this setup:
get depot_tools: https://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/depot_tools_tutorial.html#_setting_up
get source: https://v8.dev/docs/source-code
build: https://v8.dev/docs/build
My current solution has a few commands: install, build, run. The build command is the most complicated, as it attempts to automatically edit the BUILD.gn file in V8 to insert my project in place of V8's own targets. It adds all files in my source directory to the sources list.
This approach feels very wrong for a few reasons. The first is that there is almost certainly a better way to configure my project than editing a build script with a Python script. Secondly, I would like V8 to be embedded in my project, not the other way around. I only have SDL2 as a dependency, but I have cross-platform CMake builds set up, which would have to be abandoned in favor of however V8 builds its source files. I feel this could get hard to manage if I add more dependencies.
I'm currently working with a small test project with one source file.
EDIT: I can't find anything on embedding V8 that sits between running a sample and the API usage docs.
The usual approach is to have a step in your build system that builds the V8 library as a dependency (along with any other dependencies you might have). For that, it should use the official V8 build instructions. If you have a split between steps that fetch sources/dependencies and steps that compile them, then getting depot_tools and calling fetch v8 / gclient sync belongs in the fetch step. Note that you probably want to pin a version (the latest stable branch) rather than using tip-of-tree. So, in pseudocode, you'd have something like:
step get_dependencies:
    download/update depot_tools
    download/update V8 # pinned_revision (using depot_tools)
step compile (depends on "get_dependencies"):
    cd v8; gn args out/...; ninja -C out/...
    cd sdl; build sdl
    build your own code, linking against V8/SDL/other deps.
Many build systems already have convenient ways to do these things. I don't know CMake very well though, so I can't suggest anything specific there.
I agree that using scripts to automatically modify BUILD.gn feels wrong. It'll probably also turn out to be brittle and high-maintenance over time.
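Once V8 is built and linked as a library, your own main() owns the process and V8 is just a dependency, which is the direction you want. A minimal sketch, adapted from V8's hello-world sample (the exact API differs between V8 versions, so treat this as illustrative rather than definitive):

#include <cstdio>
#include <memory>

#include <libplatform/libplatform.h>
#include <v8.h>

int main(int argc, char* argv[]) {
    // Boilerplate: ICU data, snapshot data, and the platform must be set up
    // before creating an isolate.
    v8::V8::InitializeICUDefaultLocation(argv[0]);
    v8::V8::InitializeExternalStartupData(argv[0]);
    std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
    v8::V8::InitializePlatform(platform.get());
    v8::V8::Initialize();

    v8::Isolate::CreateParams create_params;
    create_params.array_buffer_allocator =
        v8::ArrayBuffer::Allocator::NewDefaultAllocator();
    v8::Isolate* isolate = v8::Isolate::New(create_params);
    {
        v8::Isolate::Scope isolate_scope(isolate);
        v8::HandleScope handle_scope(isolate);
        v8::Local<v8::Context> context = v8::Context::New(isolate);
        v8::Context::Scope context_scope(context);

        // Compile and run a trivial script inside the embedded engine.
        v8::Local<v8::String> source =
            v8::String::NewFromUtf8(isolate, "'Hello' + ', World!'",
                                    v8::NewStringType::kNormal)
                .ToLocalChecked();
        v8::Local<v8::Script> script =
            v8::Script::Compile(context, source).ToLocalChecked();
        v8::Local<v8::Value> result = script->Run(context).ToLocalChecked();
        v8::String::Utf8Value utf8(isolate, result);
        printf("%s\n", *utf8);
    }
    isolate->Dispose();
    v8::V8::Dispose();
    v8::V8::ShutdownPlatform();
    delete create_params.array_buffer_allocator;
    return 0;
}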
I got V8 building with CMake very easily using brew:
brew install v8
then add the following lines to CMakeLists.txt:
# Collect the Homebrew-installed V8 dylibs (plain GLOB is probably fine too).
file(GLOB_RECURSE V8_LIB
    "/usr/local/opt/v8/lib/*.dylib"
)
include_directories(
    YOUR_INCLUDES              # your own include paths
    /usr/local/opt/v8
    /usr/local/opt/v8/include
)
target_link_libraries(YOUR_PROJECT LINK_PUBLIC YOUR_LIBS ${V8_LIB})
Worked on Mojave 10.14.1
I have a problem. I wrote a Python script to make my work faster, and now I want to share it with my team.
I don't want them to have to deal with imports that are missing from the basic Python installation. I know there is a way to compile Python to an exe, but I wonder if I can bundle the code and its imports without messing with py2exe.
Does Python have a built-in solution for that?
I saw that Python has a .pyc compile option. Does it compile the imported modules as well?
Thanks,
Or
No, I don't believe Python has a built-in standalone compilation mode. A .pyc file is compiled bytecode, but not the kind you usually distribute as an executable program (meaning your users would still need the Python interpreter).
If you don't want to use py2exe or other similar packages, I advise you to use a portable Python distribution with which you can ship your software (see for example WinPython). The easiest way to accomplish this is to give out the portable distribution together with your code, plus perhaps a batch file (or similar) if you want .exe-like behavior.
NOTE: You could ship the compiled .pyc files of the libraries you are using and put them at the root of your software (or just state where those imports should come from), but I predict this will give you problems in the future due to dependencies between different libraries. So, it's possible, although I would hardly consider it a good solution for what it seems you are trying to achieve.
The running environment is Ubuntu 12.04. Most of the time my Python scripts have to import some external libraries or modules before they run. When I distribute a script to other Linux machines, I have to install the necessary modules and libraries all over again.
Is there some way to package all the necessary modules into one single Python file, so the script runs without installing any modules? Thanks
You could just combine your files into one, but that is a bad way to do it. Choose from these better solutions:
create a deb package with all dependencies declared. From then on, the system will automatically install all required libraries, check that they are in a correct state, and cleanly remove your files.
use rsync
pull the current version from your version control system.
I have written a script that generates a deb package after each commit to our version control system.
I'm trying to install opencv on my machine as explained in the book:
"Packtpub OpenCV Computer Vision with Python Apr 2013"
It says that in order to run Kinect you need to compile OpenCV with some extra options, so I downloaded the OpenCV .exe, which extracts to a 3.2 GB folder, and proceeded with all the steps...
I used CMake, the MinGW compiler, and everything else as the book said.
Then it tells me to try running some examples... but when I try to run drawing.py as recommended by the book (and all the other samples), it says:
python drawing.py
OpenCV Python version of drawing
Traceback (most recent call last):
  File "drawing.py", line 7, in <module>
    import cv2.cv as cv
ImportError: DLL load failed: Invalid access to memory location.
I've seen a lot of people saying this problem is fixed by adding the path to the folder of OpenCV DLLs to PATH...
How do I find out which DLL is missing, so that I can learn its name and find the folder where it lives?
I have an x64 computer, but the book tells me to install everything x86 because you're less likely to hit minor bugs. Maybe there's a version incompatibility between OpenCV, the compiler, CMake, and Python?
I've tried adding a lot of folders to the PATH variable and it didn't work.
Please tell me how to find out which DLLs are missing, so I can search for them on the computer, or suggest some other way to solve this problem, because I'm just out of ideas.
I don't have a high enough rep to add a comment, otherwise I would, but something you can do is start Python with the -v option.
Doing that adds a bit more to the console output: it causes the Python VM to report where it is looking whenever it tries to locate something, especially when failures occur. I've found that helpful when hunting down problems such as path problems.
It also sounds like you haven't got your paths set up correctly. Have you looked at ImportError: DLL load failed: %1 is not a valid Win32 application? If a DLL was expected in a certain location but wasn't present, and was then 'called' via LoadLibrary without checking whether it was actually loaded, that might cause such an error. In that case the fault lies with the original DLL, which assumed the LoadLibrary call succeeded instead of verifying that the subsequent DLL was loaded.
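For illustration, the failure mode described above looks roughly like this (the DLL name is hypothetical); the bug is omitting the error check after LoadLibrary:

#include <windows.h>
#include <cstdio>

int main() {
    // Hypothetical dependency; the point is the check that a careless
    // caller omits.
    HMODULE h = LoadLibraryA("some_dependency.dll");
    if (h == NULL) {
        // Skipping this check and using the module anyway is what produces
        // confusing downstream errors instead of a clear failure here.
        std::printf("LoadLibrary failed, error %lu\n", GetLastError());
        return 1;
    }
    FreeLibrary(h);
    return 0;
}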
In addition to the python -v yourmodule.py option, you could also try running strace (if you are on Unix -- but it doesn't sound like you are). I used to use SoftICE on Windows for digging down deep. If you know the package or the DLL at the root of the problem, and have access to a DLL export tool, you should be able to get a list of the dependencies the DLL needs (the external functions it relies on). Then you just need to know, or find, which other DLLs provide those functions. It's been a while since I had to do this sort of thing all the time, but it is entirely doable from a spelunker's perspective. There are probably easier ways to go about it, though.
I'd start with the python -v approach first.
The DLLs you need are almost certainly the ones kept in opencv/build/x64/vc11/bin (this path will differ, but be equivalent, depending on the compiler you used). That's the only folder that needs to be added to your system PATH.
Make sure that if you have a 32-bit version of Python, you compile OpenCV with a 32-bit compiler. Open the Python interpreter and it will tell you its architecture in its startup banner.
Also, try installing numpy+mkl instead of plain numpy, from the binary packages link: binary package for numpy+mkl. I had the same error and this solution solved the problem for me.
If you have plain numpy installed, don't worry: open cmd in the directory where you downloaded the new package and use:
pip install name_of_the_whl_file
or
pip3 install name_of_the_whl_file
It will automatically uninstall the old numpy and install numpy+mkl.
Also, always remember to put the import numpy statement in your code before the import cv2 statement:
import numpy
import cv2
Hope it helps.
I have embedded a Python 2.7.2 interpreter into a C++ application using the Python C API.
On the target machines, I can't guarantee a Python install, so I am trying to get the embedded interpreter to look at the folder where my application resides. So in the application directory, I have the Lib, libs and DLLs folders from Python.
In the code, I have used Py_SetPythonHome() and Py_SetProgramName() to get Python loaded and also to allow the standard libraries to be found.
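Roughly, my initialization looks like this (the paths here are placeholders, not my real ones):

// Initialization sketch; python_home must point at the folder that
// contains the Lib, libs and DLLs directories.
#include <Python.h>

static char python_home[] = "C:\\MyApp";   // placeholder path
static char program_name[] = "MyApp";      // placeholder name

int main() {
    Py_SetProgramName(program_name);
    Py_SetPythonHome(python_home);   // must come before Py_Initialize()
    Py_Initialize();
    // ... run scripts here ...
    Py_Finalize();
    return 0;
}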
One of the test scripts I'm using has:
import csv
import numpy
The csv line is now fine. Within the \libs directory I can see site-packages\numpy, but the script crashes on the numpy import line. I am using numpy 1.6.1 for this.
I think I might need to change the module search path - is this right, and what is the best way to do it so that third-party libraries like numpy are accessible to my scripts? You can assume that I can produce an absolute path to the numpy directory if that would help.
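For reference, the kind of search-path tweak I mean would be something like this (the site-packages path is a placeholder, not my real layout):

// Standalone sketch: extend the embedded interpreter's module search path.
#include <Python.h>

int main() {
    Py_Initialize();
    // Append the application's site-packages folder to sys.path at runtime.
    PyRun_SimpleString(
        "import sys\n"
        "sys.path.append('C:\\\\MyApp\\\\Lib\\\\site-packages')\n");
    PyRun_SimpleString("import numpy; print numpy.__version__");
    Py_Finalize();
    return 0;
}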
EDIT: More information - I've managed to produce the traceback, and the error I'm getting is in \numpy\core\__init__.py when it tries the line "import multiarray", with the error "ImportError: DLL load failed: The specified module cannot be found". Checking the directory, I find a multiarray.pyd. Any thoughts?
I had exactly the same problem as you: when I used the Python C API to import numpy, some .pyd modules could not be imported. When I changed to Boost.Python, there was no problem. Maybe you can also try Boost.Python. Here is a minimal sample of the kind of thing I mean (a sketch, not my exact code):
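#include <boost/python.hpp>
#include <iostream>

namespace py = boost::python;

int main() {
    Py_Initialize();
    try {
        // Boost.Python wraps the import and the error handling that the raw
        // C API makes you do by hand.
        py::object numpy = py::import("numpy");
        std::string version = py::extract<std::string>(numpy.attr("__version__"));
        std::cout << "numpy " << version << std::endl;
    } catch (const py::error_already_set&) {
        PyErr_Print();   // print the underlying Python exception
        return 1;
    }
    return 0;
}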
This turned out to be a DLL mismatch error. The numpy build that the code was finding had been compiled slightly differently from the C++ code that was embedding the interpreter.
The resolution was to recompile numpy against the Python distribution I'd used in my application, using exactly the same compiler settings. This cleared the problem.