I have a problem. I wrote a Python script to make my work faster, and now I want to share it with my team.
I don't want them to have to deal with imports that are missing from the basic Python installation. I know there is a way to compile Python to an exe, but I wonder if I can compile the code and its imports without messing with py2exe.
Does python have a built-in solution for that?
I saw that Python has a .pyc compile option. Does it compile the imported modules as well?
Thanks,
Or
No, I don't believe Python has a built-in standalone compilation mode. A .pyc file is compiled bytecode, but not the kind you usually distribute as an executable program (meaning you would still need the Python interpreter).
If you don't want to use py2exe or other similar packages, I advise you to use a portable version of Python, which you can distribute alongside your software (see for example WinPython). The easiest way to accomplish this is to ship the portable distribution together with your code, plus perhaps a batch file (or similar) if you want .exe-like behavior.
NOTE: You can provide the compiled .pyc files of the libraries you are using and put them in the root of your software (or state where those imports should come from), but I predict this will give you problems in the future due to dependencies between different libraries. So, it's possible, although I would hardly consider it a good solution for what it seems to me you are trying to achieve.
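For completeness, the standard library's py_compile module shows exactly what the built-in .pyc route gives you (a minimal sketch; the script name is made up):

```python
import pathlib
import py_compile

# A throwaway script standing in for the tool you want to share
# (hypothetical file name).
pathlib.Path("myscript.py").write_text("print('hello')\n")

# Byte-compile it. The result is a .pyc file, not a standalone
# executable: it still needs a matching CPython interpreter to run.
pyc_path = py_compile.compile("myscript.py", cfile="myscript.pyc")
print(pyc_path)
```

Note that py_compile only compiles the file you give it; imported modules are not pulled in. compileall.compile_dir() can sweep a whole directory of dependencies, but each module is still compiled separately, and the interpreter is still required either way.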
Related
One of many nice features of Java is that if I type javac x.java, it will compile the classes in x.java and any other classes mentioned in x, recursively looking for other required classes. I can then take the .class files, put them in a jar, and I have a minimal executable for x. How would I do the same for C++? I expect I need to do it with CMake, but "minimal" does not seem to be in the modern vocabulary.
I am trying to get OpenCV 4 running on a Raspberry Pi. There are lots of guides on the web, primarily targeting Python, and the rest don't work in my experience. OpenCV is classic bloatware, and the solution is to automate the build process rather than simplify it.
I feel I ought to be able to start with a relevant example application and run, for example:
g++ facedetect.cpp
then (manually) compile the missing bits.
There are, however, missing .hpp files that are generated by the cmake/make process, and the only option seems to be to build the entire edifice first.
OpenCV 4 is a CMake-based project. There is no need to combine the classes and source files yourself; that is what CMake does for you! You can just use this guide, which has every step written out for you.
I know there are ways of using TensorFlow in C++; they even have documentation for it, but I can't seem to be able to get the library for it. I've checked the build-from-source instructions, but it seems to build a pip package rather than a library I can link into my project. I also found a tutorial, but when I tried it out I ran out of memory and my computer crashed. My question is: how can I actually get the C++ library to work in my project? I have these requirements: I have to work on Windows with Visual Studio in C++. What I would love is a pre-compiled DLL that I could just link, but I haven't found such a thing and I'm open to other alternatives.
I can't comment so I am writing this as an answer.
If you don't mind using Keras, you could use the package frugally-deep. I haven't seen a library myself either, but I came across frugally-deep and it seemed easy to integrate. I am currently trying to use it, so I cannot guarantee it will work.
You could check out neural2D from here:
https://github.com/davidrmiller/neural2d
It is a neural network implementation without any dependent libraries (all written from scratch).
I would say that the best option is to use cppflow, a lightweight wrapper that I created to use TensorFlow from C++ easily.
You won't need to install anything; just download the TF C API and place it somewhere on your computer. You can take a look at the docs to see how to do that, and how to use the library.
The answer seems to be that it is hard :-(
Try this to start. You can follow the latest instructions for building from source on Windows up to the point of building the pip package. But instead of building the pip package, run these:
bazel build --config=opt //tensorflow:tensorflow.dll
bazel build --config=opt //tensorflow:tensorflow.lib
bazel build --config=opt //tensorflow:install_headers
That much seems to work fine. The problems really start when you try to use any of the header files - you will probably get compilation errors, at least with TF version >= 2.0. I have tried:
Build the label_image example (instructions in the readme.md file)
It builds and runs fine on Windows, meaning all the headers and source are there somewhere
Try incorporating that source into a Windows console executable: runs into compiler errors due to conflicts with std::min & std::max, probably caused by the Windows SDK's min/max macros (defining NOMINMAX may help).
Include c_api.h in a Windows console application: won't compile.
Include TF-Lite header files: won't compile.
There is little point investing the lengthy compile time in the first two bazel commands if you can't get the headers to compile :-(
You may have time to invest in resolving these errors; I don't. At this stage, TensorFlow lacks sufficient support for Windows C++ to rely on it, particularly in a commercial setting. I suggest exploring these options instead:
If TF-Lite is an option, watch this
Windows ML/Direct ML (requires conversion of TF models to ONNX format)
CPPFlow
Frugally Deep
Keras2CPP
UPDATE: having explored the list above, I eventually found the following worked best in my context (real-time continuous item recognition):
convert models to ONNX format (use tf2onnx or keras2onnx)
use Microsoft's ONNX runtime
Even though Microsoft recommends using DirectML where milliseconds matter, the performance of the ONNX runtime using DirectML as an execution provider means we can run a 224x224 RGB image through our Intel GPU in around 20 ms, which is quick enough for us. But it was still hard finding our way to this answer.
All,
I'm working on a new C++ project for an embedded system. Part of the system is some legacy Python code that we'll need to interface to. I've already prototyped a C++-to-Python interface using the various PyImport_ImportModule functions etc. provided by Python, and tested this on my host system (Ubuntu 64-bit 17.04).
However, the build system in the new project also builds all dependencies, so it builds Python 2.7.13 from source. The problem I am seeing is that the interface code that worked with the host-system Python is not working with the newly built-from-source Python. The error is "time.so: undefined symbol: PyExc_ValueError", and the .py file I'm trying to call from C++ does import time in one of its first few lines. I checked that time.so is present in the custom-built Python, and I did update LD_LIBRARY_PATH to include it, but this didn't help. At the end of the Python build I do see these warnings, so perhaps one of them is relevant?
Python build finished, but the necessary bits to build these modules were not found:
_bsddb _sqlite3 _ssl
_tkinter bsddb185 bz2
dbm dl gdbm
imageop readline sunaudiodev
zlib
Can anyone suggest what to try next? We are not enabling any special options or using any non-standard flags in the Python we're building from source (perhaps some extra settings are required)?
This usually happens due to one of two things:
a clean build being required, or
the wrong libpython being linked. I would suggest starting with a clean build and then double-checking your linking flags (make sure you build for Python 2.7 and link against Python 2.7, and not, say, Python 3.*).
Also, please see this discussion, it looks like a very similar issue: https://www.panda3d.org/forums/viewtopic.php?t=13222
Edit: this also might be relevant: undefined symbol: PyExc_ImportError when embedding Python in C
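When an embedded interpreter misbehaves like this, a quick sanity check run through your embedding interface can confirm which interpreter the process actually resolved (a sketch; the values printed will vary per build):

```python
import sys
import sysconfig

# The version the embedded/linked interpreter reports -- this should
# match the Python you compiled and linked against (2.7, not 3.x).
print(sys.version_info[:3])

# Where this interpreter's prefix and library live; useful for spotting
# a host-system Python sneaking in ahead of the custom build.
print(sys.prefix)
print(sysconfig.get_config_var("LIBDIR"))
```

If sys.prefix points at the host installation rather than your custom-built tree, the wrong libpython (or the wrong PYTHONHOME) is being picked up at link or load time.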
I'm trying to install opencv on my machine as explained in the book:
"Packtpub OpenCV Computer Vision with Python Apr 2013"
It says that in order to run Kinect you need to compile OpenCV with some extra options, so I downloaded the OpenCV .exe that extracts to a 3.2 GB folder and proceeded with all the steps...
I used CMake and the MinGW compiler, and did everything as the book said.
Then it tells me to try running some examples... but when I try to run drawing.py as recommended by the book (and all the others), it says:
python drawing.py
OpenCV Python version of drawing
Traceback (most recent call last):
  File "drawing.py", line 7, in <module>
    import cv2.cv as cv
ImportError: DLL load failed: Invalid access to memory location.
I saw a lot of people saying this problem is fixed by adding the path to the folder of OpenCV DLLs to PATH...
How do I find out which DLL is missing, so I can learn its name and find the folder where it is?
I have an x64 computer, but the book tells me to install everything x86 because you're less likely to hit some minor bugs. Maybe there's a version incompatibility between OpenCV, the compiler, CMake, and Python?
I've tried adding a lot of folders to the PATH variable and it didn't work.
Please tell me how to find out which DLLs are missing so I can search for them on the computer, or some other way to solve this problem, because I'm just out of ideas.
I don't have a high enough rep to add a comment, otherwise I would, but something you can do is start Python with the -v option.
Doing that adds a bit more to the console output: it makes the Python VM report where it is looking as it tries to import things, especially when failures occur. I've found that helpful when trying to hunt down problems such as path problems.
It also sounds like you haven't got your paths set up correctly. Have you looked at "ImportError: DLL load failed: %1 is not a valid Win32 application"? If a DLL was expected in a certain location but wasn't present, yet was then "called" via LoadLibrary without checking whether it had actually loaded, that might cause such an error. It is probably the fault of the original DLL that failed to verify the subsequent DLL was loaded, instead of just assuming the LoadLibrary call succeeded.
In addition to the python -v yourmodule.py
option, you could also try running strace (if you are on Unix - but it doesn't sound like you are). I used to use SoftICE on Windows for digging down deep. If you know the package or the DLL at the root of the problem, and have access to a DLL export tool, you should be able to get a list of the dependencies the DLL needs (the external functions it relies on). Then you just need to know or find those functions in other DLLs. It's been a while since I had to do this sort of thing all the time, but it is entirely doable from a spelunker's perspective. There are probably easier ways to go about it, though.
I'd start with the python -v approach first.
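To illustrate, the -v trace can also be captured programmatically (a sketch; json here is just a stand-in for cv2):

```python
import subprocess
import sys

# Run the interpreter with -v; the verbose trace goes to stderr and
# shows every import and where it was satisfied from, which is what
# you want when hunting a failing DLL/module lookup.
proc = subprocess.run(
    [sys.executable, "-v", "-c", "import json"],
    capture_output=True,
    text=True,
)
print(proc.stderr[:400])
```

Run the same thing against the failing script (python -v drawing.py) and read the trace around the import cv2.cv line to see which lookup fails.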
The DLLs you need are almost certainly the ones kept in opencv/build/x64/vc11/bin (this path will be different, but equivalent, based on whatever compiler you used). That's the only folder that needs to be added to your system path.
Make sure that if you have a 32-bit version of Python, you compile OpenCV with a 32-bit compiler. Open up Python and it tells you its architecture.
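For example, a two-line check of the interpreter's bitness (either line works on its own):

```python
import platform
import struct

# Both report whether this Python is 32- or 64-bit; the OpenCV build
# architecture must match it.
print(platform.architecture()[0])  # e.g. '64bit'
print(struct.calcsize("P") * 8)    # pointer size in bits
```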
Also, try installing numpy+mkl instead of plain numpy, from the binary packages page (the binary package for numpy+mkl). I had the same error and this solution solved the problem for me.
If you have plain numpy installed, don't worry: open cmd in the directory where you downloaded the new package and use:
pip install name_of_the_whl_file
or
pip3 install name_of_the_whl_file
It will automatically uninstall the old numpy and install numpy+mkl.
Also, always remember to add the import numpy statement in your code before the import cv2 statement.
import numpy
import cv2
Hope it helps.
I have a project with a bunch of C++ and Python/Cython files. Until now I primarily developed the C++ part and compiled it to a static library with qmake. A few methods are exposed with boost::python and executed from a .py file.
I now wanted to compile the whole thing into a standalone executable.
My question now: what is the best way to do this? I tried switching to Cython, compiling the Python files, and linking the library. But it seems there is no direct way with distutils/setup.py to compile an executable, only shared libraries.
Is there a way to easily compile both .cpp and .pyx files into an executable at once?
That way I could get rid of a lot of the boost::python wrapper stuff and get a neat mix of C++/Python, without having to import a shared library and pack the whole thing with pyinstaller.
You should look into:
pyinstaller (or py2exe) for windows/linux
py2app for osx
Since Python is your entry point, you will be able to bundle a stand-alone interpreter, environment, and resource location into an app/exe/binary. It will collect all your library modules into its self-contained site-packages.
If you don't use any normal pure .py files and only have Cython files, then it is also possible to embed an interpreter into one of them as the entry point, using Cython's --embed flag:
http://wiki.cython.org/EmbeddingCython
Note, this is a similar "freeze" approach to the previously mentioned packaging options, but it doesn't go the extra length of building a self-contained environment.