I followed this tutorial to implement the YOLO object detector: https://github.com/thtrieu/darkflow/
and I completed it successfully.
The generated .pb file can be used to migrate the graph to mobile devices (Java / C++ / Objective-C++). The input and output tensors are named 'input' and 'output', respectively.
I want to load the network with OpenCV (C++). The readNetFromTensorflow() method needs two files: a .pb and a .pbtxt. The latter is not generated by the implementation indicated above.
Similarly, to use the readNetFromDarknet() method it is necessary to have two files: a .cfg and a .weights. The latter is not generated by the implementation indicated above.
So, how can I migrate the YOLO network from Python to C++ using OpenCV?
I also tried to generate the .pbtxt file directly from the .pb file, but the readNetFromTensorflow() method still fails (a generic exception is thrown without useful information).
The exception thrown, for reference:
[Exception thrown at 0x00007FFFB80C9129 in Object_detection_inference_cpp.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000CBC18FDC90.]
Thanks in advance.
This is the code I have used to convert .pb file into .pbtxt file:
import tensorflow as tf
from google.protobuf import text_format
from tensorflow.python.platform import gfile

def graphdef_to_pbtxt(filename):
    with gfile.FastGFile(filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')
    tf.train.write_graph(graph_def, 'pbtxt/', 'tiny-yolov2-trial3-test.pbtxt', as_text=True)
    return

graphdef_to_pbtxt('tiny-yolov2-trial3-test.pb')
To use tf_text_xxx.py it is necessary to have a .config file. I only have the .cfg file from the tutorial above. For this reason I cannot use the three functions you reported. Am I doing something wrong?
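For what it's worth, the config argument of readNetFromTensorflow() is optional in OpenCV's dnn module, so one thing worth trying is loading the frozen .pb on its own. Below is a minimal sketch using the Python bindings (the C++ call is analogous); the input size and scaling are assumptions for a tiny-YOLOv2 graph and the image path is a placeholder, so treat this as a starting point rather than a verified solution:

import cv2

# Load the frozen graph alone; the .pbtxt config argument is optional here.
net = cv2.dnn.readNetFromTensorflow('tiny-yolov2-trial3-test.pb')

# Preprocess an image the way tiny-YOLOv2 usually expects (assumed values).
img = cv2.imread('test.jpg')
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)

net.setInput(blob)
out = net.forward()
print(out.shape)

If the import still throws, the graph likely contains operations that OpenCV's TensorFlow importer does not handle, which would explain the generic cv::Exception.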
I am trying to implement the face detection described in this tutorial:
http://docs.opencv.org/3.0-beta/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html#cascade-classifier
I am using OpenCV 3.0 on Ubuntu 14.04.
I downloaded the cascade XML files from here:
https://github.com/opencv/opencv/tree/master/data/haarcascades
When I run the code it gives me this error message:
OpenCV Error: Parsing error (/...../haarcascade_frontalcatface.xml(5): Valid XML should start with '<?xml ...?>') in icvXMLParse, file /home/taleb/opencv3/opencv/modules/core/src/persistence.cpp, line 2220
terminate called after throwing an instance of 'cv::Exception'
what(): /home/taleb/opencv3/opencv/modules/core/src/persistence.cpp:2220: error: (-212) /home/taleb/pythonproject/test1/haarcascade_frontalcatface.xml(5): Valid XML should start with '<?xml ...?>' in function icvXMLParse
Any suggestion?
I found a couple of fixes on Stack Overflow and other websites. They are as follows:
Change the character encoding from UTF-8 to ANSI with Notepad++ (a scripted equivalent is sketched after this answer).
Previous answer:
convert_cascade is for cascades trained by the haartraining application, and it does not support the format of cascades trained by the traincascade application.
To do this with traincascade, just run opencv_traincascade again with the same "-data" but set "-numStages" to the point you want to generate up to. The application will load the trained stages, realize that the required number of stages is already there, write the resulting cascade to XML and finish. Interrupting the process during a stage could result in corrupt data, so you're best off deleting the stage that was in progress.
Reference: https://stackoverflow.com/a/25831423/5671364.
The XML standard states:
if no encoding declaration is present in the XML document (and no
external encoding declaration mechanism such as the HTTP header is
available), the assumed encoding of an XML document depends on the
presence of the Byte-Order-Mark (BOM).
There are 3 ways to fix this:
1. Let OpenCV just put the encoding="ASCII" attribute into the top root XML tag.
2. Leave the top root XML tag as is, but encode everything as UTF-8 before writing it to file.
3. Do something else with the Byte-Order-Mark, but keep it within the standard.
Reference: http://code.opencv.org/issues/976
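Tying the fixes above together, a small script can check whether the downloaded file really starts with '<?xml' (a BOM or stray markup would show up in the first bytes), re-encode it, and confirm that OpenCV accepts it. This is only a sketch: the path is an assumption, and the re-encode step presumes the file is UTF-8 (possibly with a BOM) and contains only ASCII-representable characters.

import io
import cv2

cascade_path = "haarcascade_frontalcatface.xml"

# The parser insists the file start with '<?xml ...?>', so inspect the first bytes.
with open(cascade_path, "rb") as f:
    print(repr(f.read(16)))

# Script equivalent of the Notepad++ "UTF-8 -> ANSI" step: read the file,
# stripping a UTF-8 BOM if present, and write it back out as plain ASCII.
with io.open(cascade_path, "r", encoding="utf-8-sig") as src:
    text = src.read()
with io.open(cascade_path, "w", encoding="ascii") as dst:
    dst.write(text)

# Finally, verify that OpenCV can load the cascade.
detector = cv2.CascadeClassifier(cascade_path)
print("loaded: %s" % (not detector.empty()))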
I used the xgboost R package to train a model. I want to make predictions in a C/C++ environment. I succeeded in saving the trained model from R and loading it in my C code.
I want to test this code by saving the test data I used in R (as a DMatrix), loading it back into my C program, and doing the prediction.
In R I used the xgb.DMatrix.save() command to save the test data to a file. My C code looks like this:
DMatrixHandle d = 0;
int y = XGDMatrixCreateFromFile("test_data.DMatrix",1,&d);
This code compiles, but fails at runtime with the following error:
dmlc-core/include/dmlc/logging.h:245: [13:57:27] src/data/data.cc:51: Check failed: (version) == (kVersion) MetaInfo: invalid format
Any suggestions about how to tell xgboost to save/load things in the right format?
Any clue would be helpful.
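The "Check failed: (version) == (kVersion)" message suggests the binary DMatrix file was written by a different xgboost version than the one compiled into the C program. One way to narrow that down is to round-trip a DMatrix through a single build and see whether that file loads; below is a minimal sketch using the Python bindings purely for illustration (xgb.DMatrix.save() in R is the equivalent writer).

import numpy as np
import xgboost as xgb

# Build a small test matrix and save it in the binary DMatrix format.
X = np.random.rand(10, 3)
y = np.random.randint(2, size=10)
dtest = xgb.DMatrix(X, label=y)
dtest.save_binary("test_data.DMatrix")

# Reload with the same build; if this works but the C program still fails,
# the two sides are very likely built from different xgboost versions.
reloaded = xgb.DMatrix("test_data.DMatrix")
print(reloaded.num_row(), reloaded.num_col())

If the round trip works but the C side still rejects the file, rebuilding the C program against the same xgboost sources used to produce the file is the obvious next step.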
I've been modifying an example C++ program from the Caffe deep learning library and I noticed this code on line 234 that doesn't appear to be referenced again.
::google::InitGoogleLogging(argv[0]);
The argument provided is a prototxt file which defines the parameters of the deep learning model I'm calling. The thing that confuses me is where the result of this line goes. I know it ends up being used in the program, because if I make a mistake in the prototxt file the program will crash. However, I'm struggling to see how the data is passed to the class performing the classification tasks.
First of all, argv[0] is not the first argument you pass to your executable, but rather the executable name itself. So you are passing the executable name, not the prototxt file, to ::google::InitGoogleLogging.
The 'glog' module (Google logging) uses this name to decorate the log entries it outputs.
Second, Caffe uses Google logging (aka 'glog') as its logging module, and hence this module must be initialized once when running Caffe. This is why you have this
::google::InitGoogleLogging(argv[0]);
in your code.
I'm trying to set up the sorl-thumbnail Django app to provide thumbnails of PDF files for a web site, running on Windows Server 2008 R2 with the Apache web server.
I had sorl-thumbnail working with the PIL backend for thumbnail generation from JPEG images, which was working fine.
Since PIL cannot read PDF files, I wanted to switch to the GraphicsMagick backend.
I've installed and tested the GraphicsMagick/Ghostscript combination. From the command line,
gm convert foo.pdf -resize 400x400 bar.jpg
generates the expected JPG thumbnail. It also works for JPG-to-JPG thumbnail generation.
However, when called from sorl-thumbnail, Ghostscript crashes.
From the Django Python shell (python manage.py shell) I use the low-level command described in the sorl docs, pass in a FieldFile instance (ff) pointing to foo.pdf, and get the following error:
In [8]: im = get_thumbnail(ff, '400x400', quality=95)
**** Warning: stream operator isn't terminated by valid EOL.
**** Warning: stream Length incorrect.
**** Warning: An error occurred while reading an XREF table.
**** The file has been damaged. This may have been caused
**** by a problem while converting or transfering the file.
**** Ghostscript will attempt to recover the data.
**** Error: Trailer is not found.
GPL Ghostscript 9.07: Unrecoverable error, exit code 1
Note that ff points to the same file that converts fine when using gm convert from the command line.
I've also tried passing an ImageFieldFile instance (iff), and get the following error:
In [5]: im = get_thumbnail(iff, '400x400', quality=95)
identify.exe: Corrupt JPEG data: 1 extraneous bytes before marker 0xdb `c:\users\thin\appdata\local\temp\tmpxs7m5p' # warning/jpeg.c/JPEGWarningHandler/348.
identify.exe: Corrupt JPEG data: 1 extraneous bytes before marker 0xc4 `c:\users\thin\appdata\local\temp\tmpxs7m5p' # warning/jpeg.c/JPEGWarningHandler/348.
identify.exe: Corrupt JPEG data: 1 extraneous bytes before marker 0xda `c:\users\thin\appdata\local\temp\tmpxs7m5p' # warning/jpeg.c/JPEGWarningHandler/348.
Invalid Parameter - -auto-orient
Changing the sorl settings back to the default PIL backend and repeating the command for JPG-to-JPG conversion, the thumbnail image is generated without errors/warnings and is available through the cache.
It seems that sorl is copying the source file to a temporary file before passing it to gm, and that the problem originates in this copy operation.
I've found what I believe to be the copy operation in the sources of sorl_thumbnail-11.12-py2.7.egg\sorl\thumbnail\engines\convert_engine.py lines 47-55:
class Engine(EngineBase):
    ...
    def get_image(self, source):
        """
        Returns the backend image objects from a ImageFile instance
        """
        handle, tmp = mkstemp()
        with open(tmp, 'w') as fp:
            fp.write(source.read())
        os.close(handle)
        return {'source': tmp, 'options': SortedDict(), 'size': None}
Could the problem be here? I don't see it!
Any suggestions of how to overcome this problem would be greatly appreciated!
I'm using Django 1.4, sorl-thumbnail 11.12 with memcached, and Ghostscript 9.07.
After some trial and error, I found that the problem could be solved by changing the write mode from 'w' to 'wb', so that lines 47-55 of sorl_thumbnail-11.12-py2.7.egg\sorl\thumbnail\engines\convert_engine.py now read:
class Engine(EngineBase):
    ...
    def get_image(self, source):
        """
        Returns the backend image objects from a ImageFile instance
        """
        handle, tmp = mkstemp()
        with open(tmp, 'wb') as fp:
            fp.write(source.read())
        os.close(handle)
        return {'source': tmp, 'options': SortedDict(), 'size': None}
There are, I believe, two other locations in the convert_engine.py file where the same change should be made.
After that, the gm convert command was able to process the file.
However, since my PDFs are fairly large multi-page files, I then ran into other problems, the most important being that the get_image method makes a full copy of the file before the thumbnail is generated. With file sizes around 50 MB this turns out to be a very slow process, so in the end I opted to bypass sorl and call gm directly. The thumbnail is then stored in a standard ImageField. Not so elegant, but much faster.
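For reference, a rough sketch of that bypass, calling gm directly with subprocess and attaching the result to an ImageField; the model and field names (pdf_file, thumbnail) are made up for the example and error handling is omitted:

import os
import subprocess
from tempfile import mkstemp
from django.core.files import File

def make_pdf_thumbnail(doc, size="400x400"):
    # 'doc' is assumed to have a FileField 'pdf_file' and an ImageField 'thumbnail'.
    fd, tmp = mkstemp(suffix=".jpg")
    os.close(fd)
    try:
        # "[0]" restricts GraphicsMagick to the first page of the PDF.
        subprocess.check_call([
            "gm", "convert", doc.pdf_file.path + "[0]",
            "-resize", size, tmp,
        ])
        with open(tmp, "rb") as f:
            doc.thumbnail.save("thumb.jpg", File(f), save=True)
    finally:
        os.remove(tmp)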
I have read through the manual and I cannot find the answer. Given a magnet link, I would like to generate a torrent file so that it can be loaded on the next startup to avoid re-downloading the metadata. I have tried the fast resume feature, but I still have to fetch the metadata when I do it, and that can take quite a bit of time. The examples I have seen are for creating torrent files for a new torrent, whereas I would like to create one matching a magnet URI.
Solution found here:
http://code.google.com/p/libtorrent/issues/detail?id=165#c5
See the documentation on creating torrents:
http://www.rasterbar.com/products/libtorrent/make_torrent.html
Modify the first lines:
file_storage fs;
// recursively adds files in directories
add_files(fs, "./my_torrent");
create_torrent t(fs);
To this:
torrent_info ti = handle.get_torrent_info();
create_torrent t(ti);
"handle" is from here:
torrent_handle add_magnet_uri(session& ses, std::string const& uri add_torrent_params p);
Also before creating torrent you have to make sure that metadata has been downloaded, do this by calling handle.has_metadata().
UPDATE
It seems the libtorrent Python API is missing some of the important C++ API required to create a torrent from a magnet; the example above won't work in Python because the create_torrent Python class does not accept torrent_info as a parameter (C++ has it available).
So I tried it another way, but also hit a brick wall that makes it impossible. Here is the code:
if handle.has_metadata():
    torinfo = handle.get_torrent_info()

    fs = libtorrent.file_storage()
    for file in torinfo.files():
        fs.add_file(file)

    torfile = libtorrent.create_torrent(fs)
    torfile.set_comment(torinfo.comment())
    torfile.set_creator(torinfo.creator())

    for i in xrange(0, torinfo.num_pieces()):
        hash = torinfo.hash_for_piece(i)
        torfile.set_hash(i, hash)

    for url_seed in torinfo.url_seeds():
        torfile.add_url_seed(url_seed)
    for http_seed in torinfo.http_seeds():
        torfile.add_http_seed(http_seed)
    for node in torinfo.nodes():
        torfile.add_node(node)
    for tracker in torinfo.trackers():
        torfile.add_tracker(tracker)

    torfile.set_priv(torinfo.priv())

    f = open(magnet_torrent, "wb")
    f.write(libtorrent.bencode(torfile.generate()))
    f.close()
There is an error thrown on this line:
torfile.set_hash(i, hash)
It expects hash to be a const char*, but torrent_info.hash_for_piece(int) returns the big_number class, which has no API to convert it back to a const char*.
When I find some time I will report this missing API to the libtorrent developers, as currently it is impossible to create a .torrent file from a magnet URI when using the Python bindings.
torrent_info.orig_files() is also missing in the Python bindings; I'm not sure whether torrent_info.files() is sufficient.
UPDATE 2
I've created an issue about this; see it here:
http://code.google.com/p/libtorrent/issues/detail?id=294
Star it so they fix it fast.
UPDATE 3
It is fixed now; there is a 0.16.0 release. Binaries for Windows are also available.
Just wanted to provide a quick update using the modern libtorrent Python package: libtorrent now has the parse_magnet_uri function, which you can use to generate a torrent handle:
import libtorrent, os, time

def magnet_to_torrent(magnet_uri, dst):
    """
    Args:
        magnet_uri (str): magnet link to convert to torrent file
        dst (str): path to the destination folder where the torrent will be saved
    """
    # Parse magnet URI parameters
    params = libtorrent.parse_magnet_uri(magnet_uri)

    # Download torrent info
    session = libtorrent.session()
    handle = session.add_torrent(params)
    print("Downloading metadata...")
    while not handle.has_metadata():
        time.sleep(0.1)

    # Create torrent and save to file
    torrent_info = handle.get_torrent_info()
    torrent_file = libtorrent.create_torrent(torrent_info)
    torrent_path = os.path.join(dst, torrent_info.name() + ".torrent")
    with open(torrent_path, "wb") as f:
        f.write(libtorrent.bencode(torrent_file.generate()))
    print("Torrent saved to %s" % torrent_path)
If saving the resume data didn't work for you, you can generate a new torrent file using the information from the existing connection.
fs = libtorrent.file_storage()
libtorrent.add_files(fs, "somefiles")
t = libtorrent.create_torrent(fs)
t.add_tracker("http://10.0.0.1:312/announce")
t.set_creator("My Torrent")
t.set_comment("Some comments")
t.set_priv(True)
libtorrent.set_piece_hashes(t, "C:\\", lambda x: 0)
f=open("mytorrent.torrent", "wb")
f.write(libtorrent.bencode(t.generate()))
f.close()
I doubt that it'll make the resume faster than the function built specifically for this purpose.
Take a look at this code: http://code.google.com/p/libtorrent/issues/attachmentText?id=165&aid=-5595452662388837431&name=java_client.cpp&token=km_XkD5NBdXitTaBwtCir8bN-1U%3A1327784186190
It uses add_magnet_uri, which I think is what you need.
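For anyone on the older libtorrent Python bindings, the add_magnet_uri call referenced above looks roughly like this in Python; the magnet link and save path are placeholders, and this is only a sketch of the older API:

import time
import libtorrent as lt

ses = lt.session()
params = {'save_path': './'}
handle = lt.add_magnet_uri(ses, 'magnet:?xt=urn:btih:...', params)

# Wait until the metadata has arrived before asking for torrent_info.
while not handle.has_metadata():
    time.sleep(0.1)
print(handle.get_torrent_info().name())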