Export Weka Fiji plugin segmentation as ROI - weka

I am using the ImageJ/Fiji Trainable Weka Segmentation plugin to classify different regions of an image, and I would like to obtain those regions for further processing (as ROIs? a table with the classes?).
In the end I want to get the area of each segmented class. I need to script this task as it is part of a pipeline, but I am not sure how to extract the results from the Weka ImageJ plugin.
The code example below is just to illustrate the idea. Does anyone have a basic example of applying a classifier and exporting the results?
Thank you in advance.
import trainableSegmentation.*;
import ij.IJ;
import ij.ImagePlus;
// input train image
input = IJ.openImage( "someImage.jpg" );
// create Weka Segmentation object
segmentator = new WekaSegmentation( input );
// load classifier from file
segmentator.loadClassifier( "classifier.model" );
// apply the trained classifier to the input image
result = segmentator.applyClassifier( input );
...

There are a couple of answers about this on the image.sc forum: https://forum.image.sc/t/measuring-area-fraction-in-a-weka-classifed-image/7979
https://forum.image.sc/t/quantifing-weka-output/4660/6
That being said, you can also directly use the resulting label image for measurement within other plugins, such as the ones from MorphoLibJ (after installation, under Plugins > MorphoLibJ > Analyze).
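If you need to stay in a script, the per-class areas can also be read straight off the classified image. Below is a rough Jython sketch for the Fiji Script Editor (Language > Python); it assumes the output of applyClassifier() is a label image in which each pixel value is a class index. The pixel-by-pixel loop is only for illustration; a histogram or MorphoLibJ's label measurements would be faster.
from ij import IJ

# assume the classified label image produced by applyClassifier()
# has been shown (result.show()) and is the active image
result = IJ.getImage()
ip = result.getProcessor()
cal = result.getCalibration()
pixel_area = cal.pixelWidth * cal.pixelHeight

# count pixels per class value and convert to calibrated area
counts = {}
for y in range(ip.getHeight()):
    for x in range(ip.getWidth()):
        v = int(ip.getf(x, y))
        counts[v] = counts.get(v, 0) + 1

for class_value in sorted(counts):
    IJ.log("class %d: %.2f area units" % (class_value, counts[class_value] * pixel_area))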

Related

Approach to get the weight values from the pre-trained weights from Darknet?

I'm currently trying to implement the YOLOv3 object detection model in C (only detection, not training).
I have tested my convolution method with arbitrary values and it seems to be working as I expected.
Before stacking up multiple method calls to do forward propagation, I thought it would be safe to test with the actual pretrained weight file data.
When I looked at Darknet's pre-trained weight file, it was one huge chunk of binary data. I tried converting it to hex and decimals, but it still isn't obvious which part of the values to use.
So my question is: what should I do to extract the decimal numbers of the weights or filter values, so that I can use them in the same order as the forward propagation happening in YOLOv3?
*I'm currently trying to build my C version of YOLOv3 using the structure image shown in https://www.itread01.com/content/1541167345.html
*My C code will run on an FPGA board called MicroZed, along with other HDL code.
*I tried plugging some printf functions into various places in the Darknet code to see what kinds of data move around when YOLOv3 runs, but when I ran it in a Linux terminal, it didn't show anything new and kept producing the same output.
Any help or advice will be really appreciated. Thank you!
I am not too sure if there is a direct way to read Darknet weights, but you can convert them into .h5 format and obtain the weight values from that.
You can convert the Darknet YOLOv3 weights into .h5 format (used by Keras) by using the appropriate command from this repository.
You can choose the command based on your YOLO version from the list shown in the ReadMe of the linked repo. For the standard YOLOv3, the command for converting is
python tools/model_converter/convert.py cfg/yolov3.cfg weights/yolov3.weights weights/yolov3.h5
Once you have the .h5 weights, you can use the code snippet below to obtain the values from the weights (credit/source).
import h5py

path = "<path to weights>.h5"
weights = {}
keys = []
with h5py.File(path, 'r') as f:      # open file
    f.visit(keys.append)             # append all keys to the list
    for key in keys:
        if ':' in key:               # keys containing ':' hold the actual parameter datasets
            param_name = f[key].name
            weights[param_name] = f[key][()]   # read the dataset (f[key].value in older h5py)
            print(param_name, weights[param_name])

Can we give the test data, without labelling them?

I came across this snippet in the TensorFlow documentation, MNIST For ML Beginners.
eval_data = mnist.test.images # Returns np.array
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
Now I want to feed my own test images without labelling them, and I would like the model to predict the labels. How do I achieve this?
Yes you can, but then it would not be deep learning; it would be clustering (e.g. k-means clustering).
The basic idea is as follows (a minimal sketch is given after these steps):
Create two placeholders, one for the input data and one for the centroids
Decide on a distance metric
Create the graph
Feed only the dataset (no labels) to run the graph
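Not the TensorFlow-graph version described above, but a minimal NumPy sketch of the same steps (the number of clusters and iteration count are arbitrary choices for illustration):
import numpy as np

def kmeans(data, k, iterations=100, seed=0):
    rng = np.random.default_rng(seed)
    # initialise centroids from k random data points
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iterations):
        # distance metric: squared Euclidean distance to every centroid
        distances = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        # assign each point to its nearest centroid
        labels = distances.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points
        centroids = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

# feed only the (unlabelled) dataset, e.g. the MNIST test images:
# eval_data = mnist.test.images            # shape (10000, 784)
# cluster_ids, centres = kmeans(eval_data, k=10)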

bokeh - plotting shapefile map using datashader

Initially, I created an interactive map of the UK postcode areas where each area is coloured based on its value (e.g. the population in that postcode area), as follows.
from bokeh.plotting import figure
from bokeh.palettes import Viridis256 as palette
from bokeh.models import LinearColorMapper
from bokeh.models import ColumnDataSource
import geopandas as gpd
import pandas as pd

shp = 'file_path_to_the_downloaded_shapefile'

# read shapefile into a dataframe using geopandas
df = gpd.read_file(shp)

def expandMultiPolygons(row, geometry):
    if row[geometry].type == 'MultiPolygon':
        row[geometry] = [p for p in row[geometry]]
    return row

# Some rows were MultiPolygons instead of Polygons.
# Expand MultiPolygons into multiple rows of Polygons.
df = df.apply(expandMultiPolygons, geometry='geometry', axis=1)
df = df.set_index('Area')['geometry'].apply(pd.Series).stack().reset_index()

# Visualize the polygons. To show different colors for different postcode areas,
# I added another column called 'value' which holds some random integer value.
p = figure()
color_mapper = LinearColorMapper(palette=palette)
source = ColumnDataSource(df)
p.patches('x', 'y', source=source,
          fill_color={'field': 'value', 'transform': color_mapper},
          fill_alpha=1.0, line_color="black", line_width=0.05)
where df is a dataframe with four columns: postcode area, x-coordinate, y-coordinate, and value (i.e. population).
The above code creates an interactive map in a web browser, which is great, but I noticed the interactivity is not very smooth. If I zoom in or move the map, it renders slowly. The dataframe has only 1106 rows, so I'm quite confused why it is so slow.
As one possible solution, I came across datashader (https://datashader.readthedocs.io/en/latest/), but I find the example scripts quite complicated, and most of them use the HoloViews package in Jupyter notebooks, whereas I want to create a dashboard using bokeh.
Can anyone advise me on incorporating datashader into the above bokeh script? Do I need a different function within datashader to create the shape map instead of using bokeh's patches function?
Any suggestion would be highly appreciated!!!
Without the data file involved, I can't answer your question directly, but can offer some observations:
Datashader is unlikely to be of value for this purpose, because datashader does not currently have any support for rendering polygons. As a rule of thumb, Datashader is designed to aggregate your data, and if it's already aggregated, Datashader won't normally be of help. Here your data is aggregated by postcode, which datashader can't process, but if you had the original data per person it would be happy to render it.
If you prefer working with Bokeh directly rather than via the higher-level HoloViews/GeoViews interface, I'd recommend following Matt Rocklin's work on accelerating geopandas; his approach should be very fast for your purpose.
All that said, HoloViews and GeoViews should be a convenient way to work with Bokeh in general, whether or not you want to create a dashboard. E.g. the 2017 JupyterCon tutorial shows how to make a simple Bokeh dashboard using both libraries. It doesn't cover shapefiles, but those are covered in other GeoViews examples.
As mentioned in my comment, I believe that the complexity of your polygons might be causing your problem. The file you linked to contains several shapefiles of different sizes and complexities. You can simplify those, i.e. reduce the number of points in each polygon. This can change how they look: the effect ranges from almost no visible difference, over a bit more "edginess", to an outright angular appearance, depending on the level of simplification you choose. Depending on your needs you can choose different levels of simplicity.
I know of three easy options to get this done:
GUI: Try QGIS. It is a great open-source tool for geospatial data processing. Load your shapefile as a new layer, then use the "Simplify Geometries" tool under the Vector menu.
Command-Line: GDAL is an open-source library. It comes with a useful command-line tool. You can use it like this: ogr2ogr outfile.shp infile.shp -simplify 0.000001
Online: Visit mapshaper. Import your file, select simplify and choose your level, then export the result. What I really like here is that your file is rendered instantly, so you can immediately see the result of your simplification.
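If you would rather keep everything inside the Python pipeline, the same simplification can also be done with geopandas, which the question already uses. A small sketch (the tolerance value is illustrative and must match the units of the shapefile's coordinate reference system):
import geopandas as gpd

df = gpd.read_file('file_path_to_the_downloaded_shapefile')
# reduce the number of points per polygon; a larger tolerance means stronger simplification
df['geometry'] = df['geometry'].simplify(tolerance=0.0001, preserve_topology=True)
df.to_file('simplified.shp')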
Other than that, you should also update your bokeh version. It gets updated regularly and there have been some performance improvements since.
Using HoloViews or GeoViews will not improve your performance by itself, so it is not related to your issue. I guess James A. Bednar was just giving some general side advice there.
I found a way to speed up the interactive visualization of the UK map as I move the slider.
I first created an individual 2D image for each value of the slider, and then updated the map using those 2D images instead of bokeh's patches function.
Since the images are in array format, it is much faster to update them while changing the slider value. One downside of this method is that I can no longer use the hover function on the UK map.
I referred to the following url to convert polygon information into arrays: https://gist.github.com/brendancol/db030013e981c46acb2886060dde607e#file-rasterio_datashader_polygons-py-L35
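The linked gist rasterises the polygons; a rough sketch of that idea with rasterio is shown below. It assumes the original GeoDataFrame read with geopandas plus its 'value' column, and the raster size is illustrative:
from rasterio import features
from rasterio.transform import from_bounds

# bounding box of all postcode polygons and an illustrative output resolution
minx, miny, maxx, maxy = df.total_bounds
height, width = 600, 600
transform = from_bounds(minx, miny, maxx, maxy, width, height)

# burn each polygon's 'value' into a 2D array
image = features.rasterize(
    ((geom, value) for geom, value in zip(df['geometry'], df['value'])),
    out_shape=(height, width),
    transform=transform,
    fill=0,
)
# 'image' can then be shown with bokeh's p.image() and swapped quickly as the slider moves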

Monitor training/validation process in Caffe

I'm training the Caffe Reference Model for classifying images.
My work requires me to monitor the training process by drawing a graph of the model's accuracy after every 1000 iterations on the entire training set and validation set, which have 100K and 50K images respectively.
Right now I'm taking the naive approach: making snapshots after every 1000 iterations and running the C++ classification code, which reads raw JPEG images, forwards them through the net and outputs the predicted labels. However, this takes too much time on my machine (with a GeForce GTX 560 Ti).
Is there any faster way that I can do to have the graph of accuracy of the snapshot models on both training and validation sets?
I was thinking about using LMDB format instead of raw images. However, I cannot find documentation/code about doing classification in C++ using LMDB format.
1) You can use the NVIDIA DIGITS app to monitor your networks. It provides a GUI including dataset preparation, model selection, and learning-curve visualization. Moreover, it uses a Caffe distribution that allows multi-GPU training.
2) Or, you can simply use the log-parser inside caffe.
/pathtocaffe/build/tools/caffe train --solver=solver.prototxt 2>&1 | tee lenet_train.log
This saves the training log into "lenet_train.log". Then, by using:
python /pathtocaffe/tools/extra/parse_log.py lenet_train.log .
you parse your training log into two csv files, containing the train and test loss. You can then plot them using the following Python script:
import pandas as pd
import matplotlib.pyplot as plt

train_log = pd.read_csv("./lenet_train.log.train")
test_log = pd.read_csv("./lenet_train.log.test")

_, ax1 = plt.subplots(figsize=(15, 10))
ax2 = ax1.twinx()
ax1.plot(train_log["NumIters"], train_log["loss"], alpha=0.4)
ax1.plot(test_log["NumIters"], test_log["loss"], 'g')
ax2.plot(test_log["NumIters"], test_log["acc"], 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy')
plt.savefig("./train_test_image.png")  # save figure as png
Caffe creates logs each time you try to train something, and they are located in the tmp folder (both Linux and Windows).
I also wrote a plotting script in Python which you can easily use to visualize your loss/accuracy.
Just place your training logs with the .log extension next to the script and double-click on it.
You can use the command prompt as well, but for ease of use, when executed it loads all logs (*.log) it can find in the current directory.
It also shows the top 4 accuracies and the iteration at which they were achieved.
You can find it here: https://gist.github.com/Coderx7/03f46cb24dcf4127d6fa66d08126fa3b
The command
python /pathtocaffe/tools/extra/parse_log.py lenet_train.log
produces the following error:
usage: parse_log.py [-h] [--verbose] [--delimiter DELIMITER]
logfile_path output_dir
parse_log.py: error: too few arguments
Solution:
For successful execution of the "parse_log.py" command, we should pass two arguments:
log file
path of output directory
So the correct command is as follows:
python /pathtocaffe/tools/extra/parse_log.py lenet_train.log output_dir

Classifying one instance in Weka using NaiveBayes Classifier

I was wondering if there's a way to train the model using Naive Bayes and then apply it to a single record. I'm new to Weka, so I don't know if this is possible. Also, is there a way to store the classifier output in a file?
The answer is yes, since Naive Bayes is a simple probabilistic model based on Bayes' theorem that can be used for classification tasks.
For classification using Naive Bayes, and other classifiers, you need to first train the model with a sample dataset; once trained, the model can be applied to any record.
Of course there will always be some probability of error with this approach, but that depends mostly on the quality of your sample and the properties of your data set.
I haven't used Weka directly, only as an extension for RapidMiner, but the same principles should apply. Once the model is trained you should be able to see/print the model parameters.
I am currently searching for the same answer while using Java.
I created an ARFF file which contains training data, and used the program http://weka.wikispaces.com/file/view/WekaDemo.java as an example to train and evaluate the classifier.
I still need to figure out how to save and load a model in Java and (more importantly) how to test against a single record.
WekaDemo.java
...
public void execute() throws Exception {
    // run filter
    m_Filter.setInputFormat(m_Training);
    Instances filtered = Filter.useFilter(m_Training, m_Filter);

    // train classifier on the complete, filtered data
    m_Classifier.buildClassifier(filtered);

    // 10-fold CV with seed=1
    m_Evaluation = new Evaluation(filtered);
    m_Evaluation.crossValidateModel(
        m_Classifier, filtered, 10, m_Training.getRandomNumberGenerator(1));

    //TODO Save model
    //TODO Load model
    //TODO Test against a single record
}
...
Edit 1:
Saving and loading a model is explained here: How to test existing model with new instance in weka, using java code?
In http://weka.wikispaces.com/Use+WEKA+in+your+Java+code#Classification-Classifying%20instances there is a quick how-to for classifying a single instance.
// load model (saved from the user interface)
Classifier tree = (Classifier) weka.core.SerializationHelper.read("/some/where/j48.model");

// load unlabeled data
Instances unlabeled = new Instances(
    new BufferedReader(new FileReader("/some/where/unlabeled.arff")));

// set class attribute
unlabeled.setClassIndex(unlabeled.numAttributes() - 1);

// create copy
Instances labeled = new Instances(unlabeled);

// label instances
for (int i = 0; i < unlabeled.numInstances(); i++) {
    double clsLabel = tree.classifyInstance(unlabeled.instance(i));
    labeled.instance(i).setClassValue(clsLabel);
    System.out.println(clsLabel + " -> " + unlabeled.classAttribute().value((int) clsLabel));

    // print the class distribution (probability per class) for this instance
    double[] dist = tree.distributionForInstance(unlabeled.instance(i));
    for (int j = 0; j < dist.length; j++) {
        System.out.print(unlabeled.classAttribute().value(j) + ": " + dist[j]);
    }
}
Edit: This method doesn't train, evaluate and save a model; that is something I usually do using the Weka GUI ( http://weka.wikispaces.com/Serialization ).
The example uses a tree-type model with a nominal class, but it should be easy to convert it to a Naive Bayes example.