I have created a GStreamer plugin containing an element that generates some data when put in a pipeline (by following the GStreamer Plugin Writer's Guide).
My problem is that I cannot load my plugin in a test application. When I call gst_element_factory_make("myextractor", NULL) the result is always NULL.
More data (I'm not sure this is related):
When I run gst-inspect on my dll I get incomplete output (output generated using cygwin):
beq11049#beqleuleu1nb028 /cygdrive/c/work
$ /cygdrive/c/OSSBuild/GStreamer/v0.10.6/bin/gst-inspect.exe MyProject/Release/gstxyzlibrary.dll
Plugin Details:
Name: myextractor
Description: XYZ Library
Filename: MyProject/Release/gstxyzlibrary.dll
Version: 1.0
License: LGPL
Source module: myextractor
Binary package: MyProject
Origin URL: http://www.example.com/
myextractor: XYZExtractor
1 features:
+-- 1 elements
If I compare this with the avisubtitle addon (from the GStreamer Good Plug-ins package) I get a lot less information for mine.
For example, my plugin says:
1 features:
+-- 1 elements
The avisubtitle plugin says (generated using $ /cygdrive/c/OSSBuild/GStreamer/v0.10.6/bin/gst-inspect.exe avisubtitle):
GObject
+----GstObject
+----GstElement
+----GstAviSubtitle
My question: I need advice on how to debug this / determine what I am missing (enabling debugging output, settings and paths to check and so on). My test code (the call to gst_element_factory_make) is written in a Songbird addon, but I get the same results if I put my code in a separate executable.
It might be because the loader can't find your plugin; you should check that your plugin is in the shared library path:
Make sure you've set the LD_LIBRARY_PATH environment variable to the directory containing your compiled plugin.
In cygwin, add this to your .profile, or run it before you run your program:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/cygdrive/c/path/to/your/plugin/"
You could also use the -L linker option to specify a search path at compile time.
I am not sure how to answer your question, because several things could be missing. I think of unresolved symbol dependencies (although unlikely, since you can actually inspect your plugin), or a wrong registration somehow (also difficult to get wrong), or perhaps some Songbird or Windows issue that I am not aware of.
However, I just want to clear up one thing: GStreamer has "plug-ins" (i.e. dynamically loadable libraries) that can contain one or more elements. When you gst-inspect a plug-in versus an element, you get different output! In the case of "avisubtitle", it's an element, and you can read its class hierarchy. If you inspect the plugin (the library) "avi" or "/usr/lib/gstreamer-0.10/libgstavi.so" (on Unixes), you get the list of elements it contains and some common properties (version, project, etc.).
For example, gst-inspect XYZ where XYZ is a plugin library (or a path to one) gives different output than gst-inspect XYZ where XYZ is the name of an element inside that library.
Perhaps that will help you. Good luck! Oh, and errors/warnings can be displayed with the environment variable GST_DEBUG=*:2
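To dig a bit deeper, here is a minimal sketch (assuming GStreamer 0.10, and that the element really registers itself as "myextractor"; the DLL path is a placeholder) that tries to load the plugin file directly and prints the GError if that fails, before attempting to create the element:

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* Try to load the plugin file directly; a failure here gives a GError
     * explaining why (missing dependency, bad registration, wrong ABI, ...). */
    GError *error = NULL;
    GstPlugin *plugin = gst_plugin_load_file(
        "C:\\work\\MyProject\\Release\\gstxyzlibrary.dll", &error);
    if (plugin == NULL) {
        g_printerr("plugin load failed: %s\n",
                   error ? error->message : "unknown error");
        g_clear_error(&error);
        return 1;
    }

    /* If the plugin loaded, the element factory should now be visible. */
    GstElement *element = gst_element_factory_make("myextractor", NULL);
    if (element == NULL) {
        g_printerr("element factory 'myextractor' not found\n");
        return 1;
    }

    g_print("element created: %s\n", GST_OBJECT_NAME(element));
    gst_object_unref(element);
    return 0;
}

Running this with GST_DEBUG=*:2 (or higher) usually pinpoints whether the problem is loading the DLL or registering the factory.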
Related
I'm compiling Linux libraries (for Android, using the NDK's g++, but I bet my question makes sense for any Linux system). When delivering those libraries to partners, I need to mark them with a version number. I must also be able to access the version number programmatically (to show it in an "About" dialog or a GetVersion function, for instance).
I first compile the libraries with an unversioned flag (version 0.0) and need to change this version to a real one when I'm done testing, just before sending it to the partner. I know it would be easier to modify the source and recompile, but we don't want to do that: we would then have to test everything again after the recompile, we feel patching is less error prone (see the comments to this post), and our development environment already works this way for Windows binaries: we set a 0.0 resource version string (.rc) and later change it with verpatch. We'd like to use the same kind of process when shipping Linux binaries.
What would be the best strategy here?
To summarize, requirements are:
Compile binaries with "unset" version (0.0 or anything else)
Be able to modify this "unset" version to a specific one without having to recompile the binary (ideally, run a 3rd party tool command, as we do with verpatch under Windows)
Be able to have the library code retrieve its version information at runtime
If your answer is "rename the .so", then please provide a solution for 3.: how to retrieve version name (i.e.: file name) at runtime.
I was thinking of some solutions but have no idea whether they could work or how to achieve them.
Have a version variable (one string or 3 int) in the code and have a way to change it in the binary file later? Using a binary sed...?
Have a version variable within a resource and have a way to change it in the binary file later? (as we do for win32/win64)
Use a field of the .so (like SONAME) dedicated to this, have a tool that allows changing it...and make it accessible from C++ code.
Rename the lib + change SONAME (did not find how this can be achieved)...and find a way to retrieve it from C++ code.
...
Note that we use QtCreator to compile the Android .so files, but they may not rely on Qt. So using Qt resources is not an ideal solution.
I am afraid you started solving your problem from the wrong end. First of all, SONAME is provided at link time as a parameter to the linker, so to begin with you need a way to get the version from the source and pass it to the linker. One possible solution is to use the ident utility and embed a version string in your binary, for example:
const char version[] = "$Revision:1.2$";
This string will appear in the binary and the ident utility will detect it. Or you can parse the source file directly with grep or something similar instead. If there is a possibility of conflicts, add an extra marker that you can use later to find this string, for example:
const char version[] = "VERSION_1.2_VERSION";
So you detect the version number either from the source file or from the .o file and pass it to the linker. This should work.
As for having the debug build report version 0.0, that is easy: just skip the detection when you build debug and use 0.0 as the version unconditionally.
As for a third-party build system, I would recommend CMake, but this is just my personal preference. The solution can easily be implemented in a standard Makefile as well. I am not sure about qmake, though.
Discussion with Slava made me realize that any const char* string is actually visible in the binary file and can therefore easily be patched to something else.
So here is a nice way to fix my own problem:
Create a library with:
a definition of const char version[] = "VERSIONSTRING:00000.00000.00000.00000"; (it needs to be long enough, as we can later safely modify the binary file's content but not extend it...)
a GetVersion function that cleans up the version variable above (removing VERSIONSTRING: and the useless zeros); see the sketch after these steps. It would return:
0.0 if version is VERSIONSTRING:00000.00000.00000.00000
2.3 if version is VERSIONSTRING:00002.00003.00000.00000
2.3.40 if version is VERSIONSTRING:00002.00003.00040.00000
...
Compile the library, let's name it mylib.so
Load it from a program, ask its version (call GetVersion), it returns 0.0, no surprise
Create a little program (I did it in C++, but it could be done in Python or any other language) that will:
load a whole binary file content in memory (using std::fstream with std::ios_base::binary)
find VERSIONSTRING:00000.00000.00000.00000 in it
confirm it appears only once (to be sure we don't modify something we did not mean to; that's why I prefix the string with VERSIONSTRING, to make it more unique...)
patch it to VERSIONSTRING:00002.00003.00040.00000 if the desired version is 2.3.40
save the binary file back from patched content
Patch mylib.so using the above tool (requesting version 2.3 for instance)
Run the same program as in step 3; it now reports 2.3!
No recompilation or relinking: you patched the version straight into the binary!
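For reference, here is a minimal sketch of what the GetVersion helper from step 1 could look like (the marker format is the one used above; the trailing-component handling is just one possible convention):

#include <cstdio>
#include <string>

// The padded marker that the external patch tool searches for and overwrites.
static const char version[] = "VERSIONSTRING:00000.00000.00000.00000";

// Returns "0.0" for the unpatched marker, "2.3" or "2.3.40" once patched.
extern "C" const char *GetVersion()
{
    static std::string cleaned;
    if (cleaned.empty()) {
        int a = 0, b = 0, c = 0, d = 0;
        std::sscanf(version, "VERSIONSTRING:%d.%d.%d.%d", &a, &b, &c, &d);

        char buf[64];
        if (d != 0)
            snprintf(buf, sizeof(buf), "%d.%d.%d.%d", a, b, c, d);
        else if (c != 0)
            snprintf(buf, sizeof(buf), "%d.%d.%d", a, b, c);
        else
            snprintf(buf, sizeof(buf), "%d.%d", a, b);
        cleaned = buf;
    }
    return cleaned.c_str();
}

The patch tool then simply does a byte-for-byte search for "VERSIONSTRING:" in the .so and overwrites the 23 digit/dot characters that follow, keeping the overall length identical so the file layout is untouched.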
How do I get from C/C++ extension source code to a .pyd file for Windows (or some other artifact that I could import into Python)?
edit: The specific library that I wanted to use (BRISK) was included in OpenCV 2.4.3 so my need for this skill went away for the time being. In case you came here looking for BRISK, here is a simple BRISK in Python demo that I posted.
I have the BRISK source code (download) that I would like to build and use in my Python application. I got as far as generating a brisk.pyd file... but it was 0 bytes. If there is a better / alternative route to a brisk.pyd file, then of course I am open to that as well.
edit: Please ignore all the attempts in my original question below and see my answer which was made possible by obmarg's detailed walkthrough
Where am I going wrong?
Distutils without library path: First I tried to build the source as is with distutils and the following setup.py (I have just started learning distutils so this is a shot in the dark). The structure of the BRISK source code is at the bottom of this question for reference.
from distutils.core import setup, Extension

module1 = Extension('brisk',
                    include_dirs = ['include', 'C:/opencv2.4/build/include', 'C:/brisk/thirdparty/agast/include'],
                    #libraries = ['agast_static', 'brisk_static'],
                    #library_dirs = ['win32/lib'],
                    sources = ['src/brisk.cpp'])

setup (name = 'BriskPackage',
       ext_modules = [module1])
That instantly gave me the following lines and a 0 byte brisk.pyd somewhere in the build folder. So close?
running build
running build_ext
Distutils with library path: Scratch that attempt. So I added the two library lines that are commented out in the above setup.py. That seemed to go ok until I got this linking error:
creating build\lib.win32-2.7
C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:win32/lib /LIB
PATH:C:\Python27_32bit\libs /LIBPATH:C:\Python27_32bit\PCbuild agast_static.lib brisk_static.lib /EXPORT:initbrisk build
\temp.win32-2.7\Release\src/brisk.obj /OUT:build\lib.win32-2.7\brisk.pyd /IMPLIB:build\temp.win32-2.7\Release\src\brisk.
lib /MANIFESTFILE:build\temp.win32-2.7\Release\src\brisk.pyd.manifest
LINK : error LNK2001: unresolved external symbol initbrisk
build\temp.win32-2.7\Release\src\brisk.lib : fatal error LNK1120: 1 unresolved externals
error: command '"C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\link.exe"' failed with exit status 1120
Uncontrolled flailing: I thought maybe the libraries needed to be built, so I did a crash course (lots of crashing) with cmake + mingw, then dropped mingw in favour of VC++ Express 2010, as follows:
cmake gui: source: c:/brisk, build: c:/brisk/build
cmake gui: configure for Visual Studio 10
cmake gui: use default options and generate (CMAKE_BACKWARDS_COMPATIBILITY, CMAKE_INSTALL_PREFIX, EXECUTABLE_OUTPUT_PATH, LIBRARY_OUTPUT_PATH)
VC++ Express 10: Change to Release, build the solution generated by cmake, and get about 20 pages of what look like non-critical warnings followed by "all succeeded". Note that no DLLs are generated by this. It does generate the following libraries, of similar size to the ones included with the download:
win32/lib/Release/
agast_static.lib
brisk_static.lib
Further flailing.
Relevant BRISK source file structure for reference:
build/ (empty)
include/brisk/
    brisk.h
    hammingsse.hpp
src/
    brisk.cpp
    demo.cpp
thirdparty/agast/
    include/agast/
        agast5_8.h ....
        cvWrapper.h
    src/
        agast5_8.cc ...
    CMakeLists.txt
win32/
    bin/
        brisk.mexw32
        opencv_calib3d220.dll ...
    lib/
        agast_static.lib
        brisk_static.lib
CMakeLists.txt
FindOpenCV.cmake
Makefile
Are you sure that this BRISK library even exports Python bindings? I can't see any reference to them in the source code - it doesn't even seem to include the Python header files. This would certainly explain why you've not had much success so far - you can't just compile plain C++ code and expect Python to interface with it.
I think your second distutils example is closest to correct - it's obviously compiling things and getting to the linker stage, but then you encounter this error. That error just means it can't find a function named initbrisk which I'm guessing would be the top level init function for the module. Again this suggests that you're trying to compile a python module from code that isn't meant for it.
If you want to wrap the C++ code in a Python wrapper yourself, you could have a look at the official documentation on writing C/C++ extensions. Alternatively you could look into boost::python, SIP or Shiboken, which try to partly (or completely) automate the process of making Python extensions from C++ code.
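For illustration of where that initbrisk symbol would come from (this is not part of BRISK, just a hand-written sketch of the Python 2.x C API): every extension module named brisk must export an init function called initbrisk, which is exactly what the linker could not find. A bare-bones version looks roughly like this, with the method table left empty:

#include <Python.h>

// Methods exposed to Python would be listed here.
static PyMethodDef BriskMethods[] = {
    { NULL, NULL, 0, NULL }  // sentinel
};

// The entry point the linker was looking for: init<modulename> for Python 2.x.
PyMODINIT_FUNC initbrisk(void)
{
    Py_InitModule("brisk", BriskMethods);
}

Hand-wrapping a whole C++ library this way gets tedious quickly, which is why tools like boost::python (used in the walkthrough below), SIP and Shiboken exist.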
EDIT: Since you seem to have made a decent amount of effort to solve the problem yourself and have posted a good question, I've decided to give a more detailed response on how to go about doing this.
Quick Tutorial On Wrapping C++ Libraries Using boost::python
Personally I've only ever used boost::python for stuff like this, so I'll try and give you a good summary of how to go about doing that. I'm going to assume that you're using Visual C++ 2010. I'm also going to assume that you've got a 32bit version of python installed, as I believe the boost pro libraries only provide 32bit binaries.
Installing boost
First you'll need to grab a copy of the boost library. The easiest way to do this is to download an installer from the boost pro website. These should install all the header files and binary files that are required for using the boost C++ library on Windows. Take note of where you install these files, as you'll need them later on - it might be best to install to a path without a space in it. For simplicity I'm going to assume you put these files in C:\boost, but you can substitute the path you actually used.
Alternatively, you can follow these instructions to build boost from source. I'm not 100% sure, but it might be the case that you need to do this in order to get a version of boost::python that is compatible with the version of python you have installed.
Setting up a visual studio project
Next, you'll want to setup a visual studio project for brisk.pyd. If you open visual studio, go to New -> Project then find the option for Win32 Project. Set up your location etc. and click ok. In the wizard that appears select a DLL project type, and then tick the empty project checkbox.
Now that you've created your project, you'll need to set up the include & library paths to allow you to use python, boost::python and the brisk.lib file.
In Visual Studio's Solution Explorer, right-click on your project and select Properties from the menu that appears. This should open up the property pages for your project. Go to the Linker -> General section and look for the Additional Library Directories field. You'll need to fill this in with the paths to the .lib files for boost, Python and your brisk_static.lib. Generally these can be found in lib (or libs) subdirectories of wherever you've installed the libraries. Paths are separated with semicolons. I've attached a screenshot of my settings below:
Next, you'll need to get Visual Studio to link against the .lib files. These settings can be found in the Additional Dependencies field of the Linker -> Input section of the properties. Again it's a semicolon-delimited list. You should only need to add the libraries for Python (in my case python27.lib, but this will vary by version) and brisk_static.lib. These do not require the full path, as you added that in the previous step. Again, here's a screenshot:
You may also need to add the boost_python library file, but I think boost uses some header file magic to save you the trouble. If I'm incorrect then have a look in your boost library path for a file named something like boost_python-vc100-mt.lib and add that in.
Finally, you'll need to setup the include paths to allow your project to include the relevant C++ header files. To get the relevant settings to appear in project properties, you'll need to add a .cpp file to your project. Right click the source files folder in your solution explorer, and then go to add new item. Select a C++ File (.cpp) and name it main.cpp (or whatever else you want).
Next, go back to your project properties and go to C/C++ -> General. Under Additional Include Directories you need to add the include paths for brisk, Python and boost. Again, semicolons as separators, and again here's a screenshot:
I suspect that you might need to update these settings to include the opencv2 & agast libraries as well but I'll leave that as a task for you to figure out - it should be much the same process.
Wrapping existing c++ classes with boost::python.
Now comes the slightly trickier bit - actually writing the C++ to wrap your brisk library in boost::python. You can find a tutorial for this here, but I'll try to go over it a bit as well.
This will be taking place in the main.cpp file you created earlier. First, add the relevant include statements you'll need at the top of the file:
#include <brisk/brisk.h>
#include <Python.h>
#include <boost/python.hpp>
Next, you'll need to declare your python module. I'm assuming you'd want this to be called brisk, so you do something like this:
BOOST_PYTHON_MODULE(brisk)
{
}
This should tell boost::python to create a python module named brisk.
Next it's just a case of going through all the classes & structs that you want to wrap and declaring boost::python classes for them. The declarations of the classes should all be contained in brisk.h. You should only wrap the public members of a class, not any protected or private members. As a quick example, I've done a couple of the structs here:
BOOST_PYTHON_MODULE(brisk)
{
    using namespace boost::python;

    class_< cv::BriskPatternPoint >( "BriskPatternPoint" )
        .def_readwrite("x", &cv::BriskPatternPoint::x)
        .def_readwrite("y", &cv::BriskPatternPoint::y)
        .def_readwrite("sigma", &cv::BriskPatternPoint::sigma);

    class_< cv::BriskScaleSpace >( "BriskScaleSpace", init< uint8_t >() )
        .def( "constructPyramid", &cv::BriskScaleSpace::constructPyramid );
}
Here I have wrapped the cv::BriskPatternPoint structure and the cv::BriskScaleSpace class. Some quick explanations:
class_< cv::BriskPatternPoint >( "BriskPatternPoint" ) tells boost::python to declare a class, using the cv::BriskPatternPoint C++ class, and expose it as BriskPatternPoint in python.
.def_readwrite("y", &cv::BriskPatternPoint::y) adds a readable & writeable property to the BriskPatternPoint class. The property is named y, and will map to the BriskPatternPoint::y c++ field.
class_< cv::BriskScaleSpace >( "BriskScaleSpace", init< uint8_t >() ) declares another class, this time BriskScaleSpace, but also provides a constructor that accepts a uint8_t (an unsigned byte, which should just map to an integer in Python, but I'd be careful not to pass in a value greater than 255 - I don't know what would happen in that situation)
The following .def line just declares a function - boost::python should (I think) be able to determine the argument types of functions automatically, so you don't need to provide them.
It's probably worth noting that I haven't actually compiled any of these examples - they might well not work at all.
Anyway, to get this fully working in python it should just be a case of doing similar for every structure, class, property & function that you want accessible from python - which is potentially quite a time consuming task!
If you want to see another example of this in action, I did this here to wrap up this class
Building & using the extension
Visual studio should take care of building the extension - then using it is just a case of taking the .DLL and renaming it to .pyd (you can get VS to do this for you, but I'll leave that up to you).
Then you just need to copy the .pyd file to somewhere on your Python path (site-packages for example), import it and use it!
import brisk
patternPoint = brisk.BriskPatternPoint()
....
Anyway, I have spent a good hour or so writing this out - so I'm going to stop here. Apologies if I've left anything out or if anything isn't clear, but I'm doing this mostly from memory. Hopefully it's been of some help to you. If you need anything clarified please just leave a comment, or ask another question.
In case someone needs it, this is what I have so far. Basically a BriskFeatureDetector that can be created in Python and then have detect called. Most of this is just confirming/copying what obmarg showed me, but I have added the details that get all the way to the pyd library.
The detect method is still incomplete for me though since it does not convert data types. Anyone who knows a good way to improve this, please do! I did find, for example, this library which seems to convert a numpy ndarray to a cv::Mat, but I don't have the time to figure out how to integrate it now. There are also other data types that need to be converted.
Install OpenCV 2.2
for the setup below, I installed to C:\opencv2.2
Something about the API or implementation has changed by version 2.4 that gave me problems (maybe the new Algorithm object?) so I stuck with 2.2 which BRISK was developed with.
Install Boost with Boost Python
for the setup below, I installed to C:\boost\boost_1_47
Create a Visual Studio 10 Project:
new project --> win32
for the setup below, I named it brisk
next --> DLL application type; empty project --> finished
at the top, change from Debug Win32 to Release Win32
Create main.cpp in Source Files
Do this before the project settings so the C++ options become available in the project settings
#include <boost/python.hpp>
#include <opencv2/opencv.hpp>
#include <brisk/brisk.h>
BOOST_PYTHON_MODULE(brisk)
{
    using namespace boost::python;

    // this long mess is the only way I could get the overloaded signatures to be accepted
    void (cv::BriskFeatureDetector::*detect_1)(const cv::Mat&,
            std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint>>&,
            const cv::Mat&) const
        = &cv::BriskFeatureDetector::detect;

    void (cv::BriskFeatureDetector::*detect_vector)(const std::vector<cv::Mat, std::allocator<cv::Mat>>&,
            std::vector< std::vector< cv::KeyPoint, std::allocator<cv::KeyPoint>>, std::allocator< std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint>>>>&,
            const std::vector<cv::Mat, std::allocator<cv::Mat>>&) const
        = &cv::BriskFeatureDetector::detect;

    class_< cv::BriskFeatureDetector >( "BriskFeatureDetector", init<int, int>())
        .def( "detect", detect_1)
    ;
}
Project Settings (right-click on the project --> properties):
Includes / Headers
Configuration Properties --> C/C++ --> General
add to Additional Include Directories (adjust to your own python / brisk / etc. base paths):
C:\opencv2.2\include;
C:\boost\boost_1_47;
C:\brisk\include;C:\brisk\thirdparty\agast\include;
C:\python27\include;
Libraries (linker)
Configuration Properties --> Linker --> General
add to Additional Library Directories (adjust to your own python / brisk / etc. base paths):
C:\opencv2.2\lib;
C:\boost\boost_1_47\lib;
C:\brisk\win32\lib;
C:\python27\Libs;
Configuration Properties --> Linker --> Input
add to Additional Dependencies (adjust to your own python / brisk / etc. base paths):
opencv_imgproc220.lib;opencv_core220.lib;opencv_features2d220.lib;
agast_static.lib; brisk_static.lib;
python27.lib;
.pyd output instead of .dll
Configuration Properties --> General
change Target Extension to .pyd
Build and rename if necessary
Right-click on the solution and build/rebuild
you may need to rename the output from "Brisk.pyd" to "brisk.pyd" or else python will give you errors about not being able to load the DLL
Make brisk.pyd available to python by putting it in site packages or by putting a .pth file that links to its path
Update Path environment variable
In windows settings, make sure the following are included in your path (again, adjust to your paths):
`C:\boost\boost_1_47\lib;C:\brisk\win32\bin`
My environment is Linux CentOS 6.2, and I have a source control system such as svn/hg/git. My source code is C/C++.
I want to check in the built binary so I can keep track of which binary was released to the customer.
I assume the built binary's checksum will differ whenever the source code changes,
so I could trace back which binary was built from which version.
Is this possible, and what tricks must I follow?
I've seen some executables display the revision when executed with a -version option,
but I wonder how to prevent writing the wrong -version string into the executable.
If I keep an md5.txt and check it in instead of checking in the binary,
how can I make sure I can build an executable with the same md5 again?
Sorry - to clarify my question and prevent another unexpected answer, I would prefer an answer like:
Keep an md5sum.txt in SCM when you release a new version to the user.
Keep binaries separate from your SCM.
To rebuild a binary with the same md5sum you should make sure to:
write the version symbol into the binary at build time (e.g. via -DVERSION="1.x")
show the VERSION string to the user
remove all $Id keywords, which make your SCM run slower
keep the same CPU, OS, compiler and library environment
...
Create strings within a .cpp file like this:
static const char version[] = "@(#) $Id$";
where $Id$ is expanded by SVN keyword substitution.
Use the what command (see its manual page). It will extract these strings from the binary so you can check them.
Is this an executable or a shared library? If the latter, you could export a function that would return the version (number, string, your choice). Then dlopen(), dlsym(), and execute the function.
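A minimal sketch of that dlopen()/dlsym() approach (the library path and the exported function name get_version are placeholders; link the querying program with -ldl):

#include <dlfcn.h>
#include <cstdio>

int main()
{
    // Open the shared library whose version we want to query.
    void *handle = dlopen("./libfoo.so", RTLD_LAZY);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Look up the exported version function; assumed signature: const char *get_version(void)
    typedef const char *(*get_version_fn)(void);
    get_version_fn get_version =
        reinterpret_cast<get_version_fn>(dlsym(handle, "get_version"));
    if (!get_version) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    std::printf("library version: %s\n", get_version());
    dlclose(handle);
    return 0;
}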
For executable ELF binaries, you might be able to implant some data in the binary that can be queried using the 'nm' utility.
If you use Subversion, SvnRev will do most of the work for you (no md5 in the repo; the repo holds the sources, and the binary gets a resource with the revision id).
For Mercurial, you can get an idea for the version string from the VersioningWithMake wiki, and to get a string like the result of git describe, instead of the simple template {node|short} for HGVERSION you can use something like {latesttag}+{latesttagdistance}:{node|short}, giving (for example) 1.3+11:8a226f0f99aa
I am currently learning how to create Firefox addons using XPCOM and I want to know how to include third-party libraries when developing them. I followed a few tutorials on compiling .xpt and .dll files from C++ sources (noted in: https://developer.mozilla.org/en/How_to_build_a_binary_XPCOM_component_using_Visual_Studio and http://nerdlife.net/building-a-c-xpcom-component-in-windows/) and I am not sure how we are supposed to include the files in an addon project (which gets packaged into an .xpi file).
I am using OpenCV (C++) to do image conversions in the addon, such as resizing a very large image (a 3 MB high-res PNG) down to something small and simple (such as a 600x800, 200 KB JPG). I know that OpenCV is written in C++ and that XPCOM can compile C++ code into a DLL and an XPT. I read a couple of tutorials, but most of them point to adding these files (xpt and dll) to "C:/Program Files(x86)/Mozilla Firefox/components" instead of the addon's "components/" folder (which is not how addons work, I believe). The Mozilla page (listed above) does mention "{app}/components" and "{app}/application.ini", but I have never read anything like this before, so I am confused about how this is done.
My other option is to build the OpenCV code into a separate executable and run it from the addon (similar to how the MemoryFox addon runs an executable to clear Firefox's memory) and use that to do the image resizing.
I am really not sure how to do this (I am new to addon development) and I would like to know how to accomplish it (using third-party C++ libs in addons). An example, tutorial or explanation is fine to get me started.
Thanks in advance.
Edit: I would also like to mention that I have read through most of https://developer.mozilla.org/en/Creating_XPCOM_Components (picking and choosing topics) and it does not tell me how to achieve the above.
There are several places you can store an addon and inform Firefox of its existence.
Mostly it comes down to how you want to distribute it.
With MS Windows, the LoadLibrary process will search the current directory and then the PATH.
Simply placing your xpcom.dll alongside 3rdparty.dll in the components folder of your plugin should work OK.
XPCOM is like Windows COM but cross-platform.
You're designing to meet the requirements of interfaces.
Your addon may not need to define its own interface (.idl, which compiles to .xpt and .h), but if it does, from Firefox 4 onwards you will need to list this in the chrome.manifest.
Once compiled, you will need to list your DLL in the chrome.manifest as well.
Taking your example prototype function:
long Add(in long a, in long b);
baz_1.idl might look like
#include "nsISupports.idl"
[scriptable, uuid(F0F0F0F0-AAAA-BBBB-CCCC-111111111111)]
interface iBaz : nsISupports
{
long Add(in long a, in long b);
};
Generate your header and the .xpt typelib:
$(GECKOBIN_PATH)/xpidl -m header -I $(GECKOSDK_PATH)/idl -e baz_1.idl
$(GECKOBIN_PATH)/xpidl -m typelib -I $(GECKOSDK_PATH)/idl -e baz_1.idl
Link the contract GUID to the contract name in C++ for FF4:
NS_DEFINE_NAMED_CID(BAZ_CID); // BAZ_CID defined in baz.h, generated from baz_1.idl

static const mozilla::Module::ContractIDEntry kSLMozContracts[] = {
    { "#foo.bar.com/baz;1", &kBAZ_CID },
    { NULL }
};
usage in C++
nsCOMPtr<iBaz> baz = do_CreateInstance("#foo.bar.com/baz;1",&rv);
usage in JavaScript
var baz = Components.classes["#foo.bar.com/baz;1"].createInstance(Components.interfaces.iBaz);
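For context, the contract entry above is only one piece of the Gecko 2.0 (FF4+) registration boilerplate. A rough sketch of the rest, based on the MDN binary-component sample (class and variable names here are placeholders, and exact macros may differ between SDK versions):

// Registration boilerplate for a hypothetical Baz implementation class.
#include "mozilla/ModuleUtils.h"
#include "baz.h"   // hypothetical header declaring the Baz class and BAZ_CID

// Generates a BazConstructor factory function for the concrete class.
NS_GENERIC_FACTORY_CONSTRUCTOR(Baz)

NS_DEFINE_NAMED_CID(BAZ_CID);

static const mozilla::Module::CIDEntry kBazCIDs[] = {
    { &kBAZ_CID, false, NULL, BazConstructor },
    { NULL }
};

static const mozilla::Module::ContractIDEntry kBazContracts[] = {
    { "#foo.bar.com/baz;1", &kBAZ_CID },
    { NULL }
};

static const mozilla::Module kBazModule = {
    mozilla::Module::kVersion,
    kBazCIDs,
    kBazContracts,
    NULL
};

NSMODULE_DEFN(BazModule) = &kBazModule;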
Example.
We have a product that also provides a firefox addon, so we don't ship an xpi, but the addon sits in a subfolder of the product.
Product in
c:\program files\foo\
addon in - let's call this foobar
c:\program files\foo\bar
register the addon
HKLM\SOFTWARE\Mozilla\Firefox\Extensions\
"{GUID}"="c:\program files\foo\bar"
so then the layout of the addon is
foobar\chrome.manifest
foobar\install.rdf
foobar\components\baz_1.xpt
foobar\components\baz_1_32.dll
foobar\components\baz_1_64.dll
foobar\components\someOtherWorker.dll
foobar\chrome\ui.jar
In FF < 4 you didn't need to list binaries in the manifest; things in the components folder were loaded and checked for functions to see if they were XPCOM components, so a chrome manifest might have looked like:
content baz jar:chrome/ui.jar!/content/baz/
skin baz classic/1.0 jar:chrome/ui.jar!/skin/classic/baz/
From FF 4 the chrome manifest needs to list your binary files, so it then looks like
(we only support 64 bit after FF 3.5)
content baz jar:chrome/ui.jar!/content/baz/
skin baz classic/1.0 jar:chrome/ui.jar!/skin/classic/baz/
interfaces components/baz_1.xpt
binary-component components/baz_1_32.dll ABI=WINNT_x86-msvc
binary-component components/baz_1_64.dll ABI=WINNT_x86_64-msvc appversion>=3.5
Firefox 5 (out now) has changed:
the format of .xpt files - so your interfaces won't be found by FF5 unless you build with that SDK.
the version of the XPCOM registration - so FF5 won't use a DLL built for FF4, as the XPCOM version isn't high enough.
The tutorial you refer to is about developing applications, not extensions. So application.ini doesn't apply in your case. Extensions have a manifest file called chrome.manifest, starting with Firefox 4 all XPCOM components and XPT files need to be registered there - or they will be ignored. It no longer matters where the component is located though components/ would still be the common place for them. https://developer.mozilla.org/en/XPCOM/XPCOM_changes_in_Gecko_2.0#Component_registration should be a good starting point - that should explain what you need to write there.
Note that the source code in the tutorials you were reading is outdated as well. Current code examples are linked to from https://developer.mozilla.org/en/XPCOM/XPCOM_changes_in_Gecko_2.0#Binary_components. And you should use XULRunner SDK 2.0 (corresponds to Firefox 4) or higher. Sorry, you arrived at a bad time - that's the biggest change to XPCOM in the past 10 years.
I am attempting to put together a simple C++ test project that uses an embedded Python 3.2 interpreter. The project builds fine, but Py_Initialize raises a fatal error:
Fatal Python error: Py_Initialize: unable to load the file system codec
LookupError: no codec search functions registered: can't find encoding
Minimal code:
#include <Python.h>
int main (int, char**)
{
    Py_Initialize ();
    Py_Finalize ();
    return 0;
}
The OS is 32bit Vista.
The python version used is a python 3.2 debug build, built from sources using VC++ 10.
The python_d.exe file from the same build runs without any problems.
Could someone explain the problem and how to fix it? My own google-fu fails me.
EDIT 1
After going through the python source code I've found that, as the error says, no codec search functions have been registered. Both codec_register and PyCodec_Register are as they should be. It's just that nowhere in the code are any of these functions called.
I don't really know what this means as I still have no idea when and from where these functions should have been called. The code that raises the error is entirely missing from the source of my other python build (3.1.3).
EDIT 2
Answered my own question below.
Check the PYTHONPATH and PYTHONHOME environment variables and make sure they don't point to Python 2.x.
http://bugs.python.org/issue11288
Parts of this have been mentioned before, but in a nutshell this is what worked for my environment where I have multiple Python installs and my global OS environment set-up to point to a different install than the one I attempt to work with when encountering the problem.
Make sure your (local or global) environment is fully set up to point to the install you intend to work with. Say you have two (or more) installs, e.g. a python27 and a python33 (sorry, these are Windows paths, but the following should be valid for equivalent UNIX-style paths just as well; please let me know about anything I'm missing here - the DLLs path will probably differ):
C:\python27_x86
C:\python33_x64
Now, if you intend to work with your python33 install but your global environment is pointing to python27, make sure you update your environment as such (while PATH and PYTHONHOME may be optional (e.g. if you temporarily work in a local shell)):
PATH="C:\python33_x64;%PATH%"
PYTHONPATH="C:\python33_x64\DLLs;C:\python33_x64\Lib;C:\python33_x64\Lib\site-packages"
PYTHONHOME=C:\python33_x64
Note that you might need/want to append any other library paths to your PYTHONPATH if your development environment requires it, but having your DLLs, Lib and site-packages properly set up is of prime importance.
Hope this helps.
The core reason is quite simple: Python does not find its modules directory, so of course it cannot load the encodings module either.
The Python doc on embedding says "Py_Initialize() calculates the module search path based upon its best guess" ... "In particular, it looks for a directory named lib/pythonX.Y".
Yet if the modules are installed in (just) lib - relative to the Python binary - the above guess is wrong.
Although the docs say that PYTHONHOME and PYTHONPATH are taken into account, we observed that this was not the case; their actual presence or content was completely irrelevant.
The only thing that had an effect was a call to Py_SetPath() with e.g. [path-to]\lib as argument before Py_Initialize().
Sure this is only an option for an embedding scenario where one has direct access and control over the code; with a ready-made solution, special steps may be necessary to solve the issue.
Ran into the same thing trying to install brew's python3 under Mac OS! The issue here is that in Mac OS, homebrew puts the "real" python a whole layer deeper than you think. You would think from the homebrew output that
$ echo $PYTHONHOME
/usr/local/Cellar/python3/3.6.2/
$ echo $PYTHONPATH
/usr/local/Cellar/python3/3.6.2/bin
would be correct, but invoking $PYTHONPATH/python3 immediately crashes with abort 6, "can't find encodings". This is because although that $PYTHONHOME looks like a complete installation, having a bin, lib etc., it is NOT the actual Python, which lives inside a Mac OS "Framework". Do this:
PYTHONHOME=/usr/local/Cellar/python3/3.x.y/Frameworks/Python.framework/Versions/3.x
PYTHONPATH=$PYTHONHOME/bin
(substituting version numbers as appropriate) and it will work fine.
Since Python 3, startup needs the encodings module, which can be found in the PYTHONHOME\Lib directory.
In fact, the Py_Initialize() API does the initialization and imports the encodings module.
Make sure PYTHONHOME\Lib is in sys.path and check that the encodings module is there.
I had this issue with python 3.5, anaconda 3, windows 7 32 bit. I solved it by moving my pythonX.lib and pythonX.dll files into my working directory and calling
Py_SetPythonHome(L"C:\\Path\\To\\My\\Python\\Installation");
before Py_Initialize so that it could find what it needed; my path was to "...\Anaconda3\". The extra step of calling Py_SetPythonHome was required for me, or else I'd end up getting other strange errors when Python imported files.
I just ran into the exact same problem (same Python version, OS, code, etc).
You just have to copy Python's Lib/ directory into your program's working directory (in VC it's the directory where the .vcproj is).
There appears to be something going wrong with the release build either failing to include the appropriate codecs or else misidentifying the codec to use for system APIs. Since the python_d executable is working, what does that return for sys.getfilesystemencoding()? (Use the C API to call that between your Initialize/Finalize calls.)
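For example, a quick way to run that check from the embedding side (a sketch; it simply executes one line of Python between the Initialize/Finalize calls, so it is only useful in a configuration where Py_Initialize succeeds):

#include <Python.h>

int main (int, char**)
{
    Py_Initialize ();
    // Print the codec Python has selected for filesystem paths.
    PyRun_SimpleString("import sys; print(sys.getfilesystemencoding())");
    Py_Finalize ();
    return 0;
}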
I had the same issue and found this question. However, the answers here did not solve my problem. I started debugging the CPython code and thought that I might have discovered a bug. Therefore I opened an issue on the Python issue tracker.
My mistake was that I did not understand that Py_SetPath clears all inferred paths.
So one needs to set all paths when calling this function.
Link to the issue conversation
For completeness I have also copied the most important part of the conversation below.
My original issue text
I compiled the source of CPython 3.7.3 myself on Windows with Visual Studio 2017, together with some packages such as numpy. When I start the Python interpreter I am able to import and use numpy. However, when I run the same script via the C API I get a ModuleNotFoundError.
So the first thing I did was to check whether numpy is in my site-packages directory, and indeed there is a folder named numpy-1.16.2-py3.7-win-amd64.egg. (Makes sense, because the Python interpreter can find numpy.)
The next thing I did was to get some information about the sys.path variable created when running the script via the C-API.
#### sys.path content ####
C:\Work\build\product\python37.zip
C:\Work\build\product\DLLs
C:\Work\build\product\lib
C:\PROGRAM FILES (X86)\MICROSOFT VISUAL STUDIO\2017\PROFESSIONAL\COMMON7\IDE\EXTENSIONS\TESTPLATFORM
C:\Users\rvq\AppData\Roaming\Python\Python37\site-packages
Examining the content of sys.path I noticed two things.
C:\Work\build\product\python37.zip has the correct path 'C:\Work\build\product\'. There was just no zip file; all my files and directories were unpacked. So I zipped the files into an archive named python37.zip and this resolved the import error.
C:\Users\rvq\AppData\Roaming\Python\Python37\site-packages is wrong; it should be C:\Work\build\product\Lib\site-packages, but I don't know how this wrong path is created.
The next thing I tried was to use Py_SetPath(L"C:/Work/build/product/Lib/site-packages") before calling Py_Initialize(). This led to
Fatal Python Error 'unable to load the file system encoding'
ModuleNotFoundError: No module named 'encodings'
I created a minimal C++ project with exactly these two calls and started to debug CPython.
int main()
{
    Py_SetPath(L"C:/Work/build/product/Lib/site-packages");
    Py_Initialize();
}
I tracked the call of Py_Initialize() down to the call of
static int
zipimport_zipimporter___init___impl(ZipImporter *self, PyObject *path)
inside of zipimport.c
The comment above this function states the following:
Create a new zipimporter instance. 'archivepath' must be a path-like
object to a zipfile, or to a specific path inside a zipfile. For
example, it can be '/tmp/myimport.zip', or
'/tmp/myimport.zip/mydirectory', if mydirectory is a valid directory
inside the archive. 'ZipImportError' is raised if 'archivepath'
doesn't point to a valid Zip archive. The 'archive' attribute of the
zipimporter object contains the name of the zipfile targeted.
So for me it seems that the C-API expects the path set with Py_SetPath to be a path to a zipfile. Is this expected behaviour or is it a bug?
If it is not a bug, is there a way to change this so that it can also detect directories?
PS: The ModuleNotFoundError did not occur for me when using Python 3.5.2+, which was the version I used in my project before. I also checked whether I had set any PYTHONHOME or PYTHONPATH environment variables, but I did not see either of them on my system.
Answer
This is probably a documentation failure more than anything else. We're in the middle of redesigning initialization though, so it's good timing to contribute this feedback.
The short answer is that you need to make sure Python can find the Lib/encodings directory, typically by putting the standard library in sys.path. Py_SetPath clears all inferred paths, so you need to specify all the places Python should look. (The rules for where Python looks automatically are complicated and vary by platform, which is something I'm keen to fix.)
Paths that don't exist are okay, and that's the zip file. You can choose to put the stdlib into a zip, and it will be found automatically if you name it the default path, but you can also leave it unzipped and reference the directory.
A full walk through on embedding is more than I'm prepared to type on my phone. Hopefully that's enough to get you going for now.
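To make that concrete, here is a minimal sketch of what the advice amounts to for the layout described in this question (the paths are the ones quoted above; Py_SetPath replaces the entire inferred search path, so the standard library - which contains Lib/encodings - the DLLs folder and site-packages all have to be listed, separated by ';' on Windows):

#include <Python.h>

int main()
{
    // List every location Python should search, including the stdlib.
    Py_SetPath(L"C:/Work/build/product/Lib;"
               L"C:/Work/build/product/DLLs;"
               L"C:/Work/build/product/Lib/site-packages");

    Py_Initialize();

    // With the stdlib visible, encodings loads and third-party imports work.
    PyRun_SimpleString("import numpy; print(numpy.__version__)");

    Py_Finalize();
    return 0;
}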
I had the problem and was tinkering with different solutions mentioned here. Since I was running my project from Visual Studio, apparently, I needed to set the environment path inside Visual Studio and not the system path.
Adding a simple PYTHONHOME=PATH\TO\PYTHON\DIR in the project solution\properties\environment solved the problem.
For me this happened when I updated 64-bit Python from 3.6.4 to 3.6.5. It threw an error like "unable to extract python.dll. Do you have permissions?"
PyCharm also failed to load the interpreter, even though I reloaded it in the settings. Running the python command gave the same error, with and without administrator mode.
Reason
There was an error in the installation of Python: the include folder in the Python installation directory C:\Users\USERNAME\AppData\Local\Programs\Python\Python36 was missing.
Reinstalling Python (a repair in place, not a removal and install) also didn't solve the issue.
Solution
Uninstall Python and install it again,
because running the installer over the existing install was just extracting the same files, excluding the include folder.
In my case, on Windows, if you have multiple Python versions installed and PYTHONPATH points to one of them, the other ones don't work. I found that if you just remove PYTHONPATH, they all work fine.
For those working in Visual Studio, simply add the include, Lib and libs directories to the Include Directories and Library Directories under
Project Properties -> Configuration Properties -> VC++ Directories:
For example, I have Anaconda3 on my system and work with Visual Studio 2015. This is how the settings look (note the Include and Library directories):
Edit:
As also pointed out by bossi, setting PYTHONPATH in your user Environment Variables section seems to be necessary.
A sample input can look like this (in my case):
C:\Users\Master\Anaconda3\Lib;C:\Users\Master\Anaconda3\libs;C:\Users\Master\Anaconda3\Lib\site-packages;C:\Users\Master\Anaconda3\DLLs
Also, you need to restart Visual Studio after you set up the PYTHONPATH in your user Environment Variables for the changes to take effect.
Also note that :
Make sure the PYTHONHOME environment variable is set to the Python
interpreter you want to use. The C++ projects in Visual Studio rely on
this variable to locate files such as python.h, which are used when
creating a Python extension.
So, for some reason the Python DLL fails to locate the encodings module. The python.exe executable apparently finds it because it has the expected relative path. Modifying the search path works.
The reason for all of this? I don't know, but at least it works. I highly suspect a typo on my part somewhere; that's usually the reason for odd bugs, it seems.