I am trying to use an SVM classifier in Weka. I downloaded version 3.7.13, but when I click the Classify tab, SVM is not in the list of classifiers.
How do I use SVM in this tool? Please help me overcome this problem.
In the Weka GUI, go to Tools -> Package Manager and install LibSVM or LibLinear (both are SVM implementations).
Another SVM implementation is 'SMO', which is under Classify -> Classifier -> functions (if it is not listed, install it as described above).
Alternatively, you can add the .jar files of these algorithms to your classpath and use them from your Java code.
SVM classes are not bundled with vanilla Weka; you have to add the LibSVM library (jar file) to your project manually to get those classifiers.
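For the classpath route, a minimal command-line sketch (the jar and .arff file names here are illustrative assumptions, not files shipped under these exact names):

```shell
# Train and evaluate the built-in SMO classifier with 10-fold
# cross-validation; weka.jar and train.arff are assumed to exist locally.
java -cp weka.jar weka.classifiers.functions.SMO -t train.arff

# With the LibSVM package installed, its wrapper class can be run the same
# way; the LibSVM jar must also be on the classpath.
java -cp weka.jar:libsvm.jar weka.classifiers.functions.LibSVM -t train.arff
```

The same class names (`weka.classifiers.functions.SMO`, `weka.classifiers.functions.LibSVM`) can be instantiated directly from Java code if you prefer a programmatic setup.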
How do I access the OpenCV extended image processing module (ximgproc)? I need one filter specifically: fastGlobalSmootherFilter.
I have OpenCV 3.2.0 incorporated into my C++ project. I'm looking for this method:
http://docs.opencv.org/master/da/d17/group__ximgproc__filters.html#gaf8673fe9147160ad96ac6053fac3c106
which is in this module:
http://docs.opencv.org/master/df/d2d/group__ximgproc.html.
I found it through the research page here:
https://sites.google.com/site/globalsmoothing/
I've tried searching through the OpenCV header files, but none reference this function. I can't find edge_filter.hpp which is supposed to house some of these filters. How does one actually call the method?
On the OpenCV developer site you will find instructions for building the contrib modules (the "extra" modules):
https://github.com/opencv/opencv_contrib/blob/master/README.md
By default, they are not included in your build.
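A typical out-of-source build with the contrib modules enabled might look like this (a sketch; the directory layout is an assumption):

```shell
# Clone OpenCV and the contrib repository side by side.
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git

mkdir opencv/build && cd opencv/build

# Point OPENCV_EXTRA_MODULES_PATH at the contrib modules so that
# ximgproc (among others) is compiled into the build.
cmake -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules ..
make -j4
sudo make install
```

After installing, you include `<opencv2/ximgproc.hpp>` in your project and call the filter via the `cv::ximgproc` namespace.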
I can't find the file english.conll.4class.caseless.distsim.crf.ser.gz in the zip file downloaded from http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip . Can anyone tell me how to get that caseless classifier for Stanford CoreNLP?
I don't think they ship the caseless models as standalone .gz files; they appear to be derived via a Makefile script. I checked the Linux versions as well, and the file is not available there either; it seems they build it via a truecaser. While I don't fully understand the mechanism, here is a pointer to where the references appear in the Stanford CoreNLP GitHub repository:
https://github.com/stanfordnlp/CoreNLP/blob/d558d95d80b36b5b45bc21882cbc0ef7452eda24/scripts/ner/Makefile
You can search for "english.conll.4class.caseless.distsim.crf.ser.gz" in the CoreNLP GitHub repository for more pointers.
FYI, you can also look at older releases, as the documentation mentions that the caseless models used to be provided separately.
For those who face the same problem:
Download the model jar from https://stanfordnlp.github.io/CoreNLP/index.html#download (there is a table listing different models for different languages), extract the jar's contents (e.g. I used WinRAR), and go to the edu/stanford/nlp/models/ner directory; you will find the .ser.gz files for each model there.
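On the command line, the same extraction can be done with the jar tool, since a model jar is just a zip archive (the jar file name below is an example; use whichever models jar you downloaded):

```shell
# Extract the downloaded models jar into the current directory.
jar xf stanford-corenlp-models.jar
# Equivalent alternative if the JDK tools are not installed:
#   unzip stanford-corenlp-models.jar

# The NER model files (.ser.gz) end up under this path:
ls edu/stanford/nlp/models/ner/
```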
I have a big C++ program built with Automake and it would be a very big hassle (practically nearly impossible given my time constraints) to convert it to use the Bazel build system. Is there any way I can use a TensorFlow trained model (deep convolutional net) within my program to make predictions (I don't need to do learning within the program right now, but it would be really cool if that can also be done)? Like using TensorFlow as a library?
Thanks!
TensorFlow has a fairly narrow C API exported in the c_api.h header file. The library itself can be produced by building the libtensorflow.so target. The C API has everything you need to load a pre-trained model and run inference on it to make predictions. You don't need to convert your build system to Bazel; all you need is to use Bazel once to build the //tensorflow:libtensorflow.so target, then copy libtensorflow.so and c_api.h to wherever you see fit.
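Concretely, the steps might look like this (a sketch for a typical Linux setup; the destination paths and the final link line are assumptions about your project layout):

```shell
# From a TensorFlow source checkout, configure and build the C library.
./configure
bazel build //tensorflow:libtensorflow.so

# Copy the artifacts into your own (Automake-based) project tree.
cp bazel-bin/tensorflow/libtensorflow.so /path/to/myproject/lib/
cp tensorflow/c/c_api.h                  /path/to/myproject/include/

# Your program then links against it like any ordinary shared library, e.g.:
#   g++ main.cpp -Iinclude -Llib -ltensorflow -o main
```

From that point on, your Automake build never touches Bazel again; it just treats libtensorflow.so as a prebuilt dependency.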
Are there any plugins out there for Photoshop to edit/create KTX files? There are DDS plugins for Photoshop, but I couldn't find any KTX tools other than this custom-made one: http://www.youtube.com/watch?v=oZs21xda9lY
There aren't any official ones, only open-source and custom-made ones. http://www.khronos.org/opengles/sdk/tools/KTX/ is another one that I've found.
I am using OpenCV 2.4.7 with C++ to build an application which will eventually be distributed. As far as I understand, OpenCV falls under the BSD open-source license.
However, I found that there is a package called features2d which has a class called MSER which uses a table called "chitab3". This table is extracted from a paper which is under GPL. This is present in the source code of modules/features2d/src/mser.cpp as follows:
The color image algorithm is taken from: Maximally Stable Colour Regions for Recognition and Match;
it should be much slower than grey image method ( 3~4 times );
the chi_table.h file is taken directly from paper's source code which is distributed under GPL.
Since the MSER class is available in features2d, when features2d.dll is distributed so is MSER and eventually chitab3 as well.
All this led to the following questions:
1. What would be the best practice to prevent the usage of chitab3? I have no use for the MSER class, but I need features2d.dll because it contains other modules required by the application.
2. If chitab3 is under the GPL, then MSER, features2d, and OpenCV should be under the GPL as well. Why is OpenCV under the BSD license although part of one of its modules is under the GPL?
You should report this issue directly to the OpenCV team to make them aware of it.
For your application, you can simply recompile OpenCV from source after moving MSER into the non-free OpenCV module and explicitly disabling the non-free module in the build system. Then the DLL you ship contains no code or data that you cannot use at your own convenience.
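A sketch of that rebuild for OpenCV 2.4.x (this assumes you have already moved the MSER sources into the nonfree module yourself; the source directory name is an example):

```shell
# Configure an out-of-source build of OpenCV 2.4.x with the non-free
# module disabled; features2d is still built, now without MSER/chitab3.
cd opencv-2.4.7 && mkdir build && cd build
cmake -D BUILD_opencv_nonfree=OFF ..
make -j4
```

Any module can be switched off the same way with `-D BUILD_opencv_<name>=OFF`, so you can verify in the build summary that nonfree is excluded before shipping the resulting features2d.dll.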