I want to do transfer learning with YOLOv3 in Darknet: take the pre-trained YOLOv3 model that was trained on the COCO dataset and further train it on my own dataset to detect additional objects. What are the steps I should follow? And how do I label my data so that Darknet can use it? Please help, as this is the first time I have used Darknet and YOLO.
It's all explained here: https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects
Note that your annotations must be consistent: any object left unannotated will degrade learning and therefore prediction.
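For reference, the Darknet/YOLO label format is one .txt file per image (same base name), with one line per object: <class_id> <x_center> <y_center> <width> <height>, all normalized to [0, 1] by the image dimensions. A minimal sketch of producing such a line from a pixel-space box (the file name and box values below are placeholders):

# Convert a pixel-space bounding box to a Darknet/YOLO label line.
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / float(img_w)
    height = (y_max - y_min) / float(img_h)
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: class 0, box from (50, 80) to (250, 180) in a 640x480 image
with open("img001.txt", "w") as f:  # label file matching img001.jpg
    f.write(to_yolo_line(0, 50, 80, 250, 180, 640, 480) + "\n")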
This question was answered in "Fine-tuning and transfer learning by the example of YOLO".
The answer given by gameon67 suggests this:
If you are using AlexeyAB's darknet repo (not darkflow), he suggests doing fine-tuning instead of transfer learning by setting this param in the cfg file: stopbackward=1.
Then run ./darknet partial yourConfigFile.cfg yourWeightsFile.weights outPutName.LastLayer# LastLayer#, for example:
./darknet partial cfg/yolov3.cfg yolov3.weights yolov3.conv.81 81
This will create yolov3.conv.81 and freeze the lower layers; you can then train using the weights file yolov3.conv.81 instead of the original darknet53.conv.74.
Reference: https://github.com/AlexeyAB/darknet#how-to-improve-object-detection
I want to detect more objects than the COCO dataset covers (it has only 80 classes), and I also want to detect as many actions as possible, like hugging, swimming, etc.
I don't care about the size, and I do not want to train the model myself. Is there a dataset (weights) big enough and already available that I can download and use, or do I have to label and train for YOLO myself?
You can find a very large dataset with bounding boxes here!
What you are trying to classify is known as action recognition. Here [1] is a good repo that lists many out-of-the-box models for this task.
An explanation: models like YOLO contain two main blocks: feature extraction (the CNN layers) and classification (the linear layers). When training from scratch, both blocks are trained from scratch. It is easy to train the classification block for what you want, but it is hard to train the feature-extraction block (as it takes a lot of time). Hence, we typically use models pre-trained on a general dataset (YOLO is trained on COCO), so the feature extractor starts from a reasonably general place. We then replace the classification block with our own, trained from scratch for our task.
TL;DR, you can use a pre-trained YOLO model on COCO for your task by replacing the last linear layers to classify what you want. Here are some resources for different frameworks [2, 3].
Here [4] is a simple walkthrough of how to do this.
[1] https://github.com/jinwchoi/awesome-action-recognition/blob/master/README.md#action-recognition-and-video-understanding
[2] TensorFlow: https://www.tensorflow.org/tutorials/images/transfer_learning
[3] PyTorch: https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
[4] https://blog.roboflow.com/training-yolov4-on-a-custom-dataset/
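For a concrete picture of the head replacement described above, here is a minimal PyTorch sketch in the spirit of [3]; the ResNet-18 backbone and the class count are placeholders, not anything YOLO-specific:

import torch.nn as nn
from torchvision import models

num_classes = 5  # placeholder: the number of categories in your task

# Pretrained feature extractor (pretrained=True is the older torchvision API;
# newer versions take a weights= argument instead)
model = models.resnet18(pretrained=True)

# Freeze the feature-extraction layers so only the new head is trained
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with a fresh, trainable layer
model.fc = nn.Linear(model.fc.in_features, num_classes)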
I'm still very new to the world of machine learning and am looking for some guidance on how to continue a project I've been working on. Right now I'm trying to feed the Food-101 dataset into the Image Classification algorithm in SageMaker, and later deploy the trained model onto an AWS DeepLens for food-detection capabilities. Unfortunately, the dataset comes only as raw image files organized in subfolders, plus a .h5 file (I'm not sure if I can feed that file type directly into SageMaker?). From what I've gathered, neither of these is a suitable way to feed the dataset into SageMaker, so I was wondering if anyone could point me in the right direction for preparing the dataset properly, i.e. converting it to a .rec file or something else. Apologies if the scope of this question is very broad; I am still a beginner and am simply stuck, so any help you might be able to provide would be fantastic. Thanks!
If you want to use the built-in algo for image classification, you can use either Image format or RecordIO format; see https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html#IC-inputoutput
Image format is straightforward: just build a manifest file with the list of images. This could be an easy solution for you, since you already have images organized in folders.
RecordIO requires that you build files with the 'im2rec' tool; see https://mxnet.incubator.apache.org/versions/master/faq/recordio.html.
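Since your images are already organized in class subfolders, here is a rough sketch of generating the .lst listing (tab-separated index, label, relative path) that both the image format and im2rec consume; the folder and file names are placeholders:

import os

root = "food-101/images"  # placeholder: one subfolder per class
classes = sorted(os.listdir(root))

with open("train.lst", "w") as f:
    index = 0
    for label, cls in enumerate(classes):
        for name in sorted(os.listdir(os.path.join(root, cls))):
            # .lst format: image_index <tab> label_index <tab> relative_path
            f.write(f"{index}\t{label}\t{cls}/{name}\n")
            index += 1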
Once your data set is ready, you should be able to adapt the sample notebooks available at https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms
I am currently developing an application for facial recognition.
The algorithms are implemented and trained using the MatConvNet library (http://www.vlfeat.org/matconvnet/). At the end, I have a network saved as a .mat file.
I would like to know if it is possible to extract the weights of the network from its .mat file, write them to an XML file, and read them with Caffe C++. I would like to reuse them in Caffe C++ for some testing and a hardware implementation. Is there an efficient and practical way to do so?
Thank you very much for your help.
The layer whose parameters you'd like to store must be set as 'precious'. In net.vars you can access the stored values and write them out.
There is a conversion script that converts matconvnet models to caffe models here which you may find useful.
You can't directly use the weights of a network trained by MatConvNet in Caffe; you can merely import your model from MatConvNet to Caffe (https://github.com/vlfeat/matconvnet/blob/4ce2871ec55f0d7deed1683eb5bd77a8a19a50cd/utils/import-caffe.py). But this script does not support all layers, and you may have difficulties employing it.
The best way is to define your Caffe prototxt in Python to mirror the MatConvNet model.
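If you do extract the weights yourself, here is a rough sketch of reading them out of the .mat file from Python with scipy; the field layout below assumes a SimpleNN-style save and will differ for DagNN models:

import numpy as np
import scipy.io

# Load the saved MatConvNet network (struct layout depends on how it was saved)
mat = scipy.io.loadmat("net.mat", struct_as_record=False, squeeze_me=True)
net = mat["net"]

# In a SimpleNN-style save, each layer carries its filters/biases in 'weights'
for layer in np.atleast_1d(net.layers):
    if hasattr(layer, "weights"):
        for w in np.atleast_1d(layer.weights):
            print(layer.type, np.asarray(w).shape)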
I am trying to extract features from a new dataset by using a pre-trained network, such as classify_image_graph_def.pb released by Google with TensorFlow (inception-2015-12-05.tgz). I was successful at that, as there is a tutorial at transfer_learning which uses classify_image_graph_def.pb (inception_v3.pb) to extract features from a new dataset.
However, in the newer release of pre-trained models, TensorFlow provides checkpoint files (e.g. resnet_v1_152.ckpt) instead of a GraphDef (e.g. resnet_v1_152.pb). I was wondering how I could use these checkpoint files to extract features as in transfer_learning. Could anyone give me some directions?
Just follow the official model save/restore doc here.
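A minimal sketch of that pattern for the slim-era checkpoints (TF 1.x contrib API; the dummy batch is a placeholder for your preprocessed images):

import numpy as np
import tensorflow as tf
from tensorflow.contrib.slim.nets import resnet_v1
slim = tf.contrib.slim

# Rebuild the graph the checkpoint was trained with
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
    # num_classes=None stops before the logits, i.e. a pure feature extractor
    features, _ = resnet_v1.resnet_v1_152(images, num_classes=None,
                                          is_training=False)

saver = tf.train.Saver()
with tf.Session() as sess:
    # Load the pretrained variables from the checkpoint file
    saver.restore(sess, "resnet_v1_152.ckpt")
    my_batch = np.zeros((1, 224, 224, 3), np.float32)  # placeholder input
    feats = sess.run(features, feed_dict={images: my_batch})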
I am implementing facial expression recognition and am using an SVM to classify a given expression.
When I train, I use these calls:
svm.train(myFeatureVector, myLabels, Mat(), Mat(), myParameters);
svm.save("myClassifier.yml");
and I later load the saved classifier when predicting with
response = svm.predict(incomingFeatureVector);
But when I want to train more than once (exit the program and start it again), each run overwrites my previous SVM file. Is there any way I can read the previous SVM file, add more data to it, and then re-save it? I looked through the OpenCV documentation and found nothing. However, on this page there is a method called CvSVM::read; I don't know what it does or how to use it.
I hope someone can help me :(
What you are trying to do is incremental learning, but unfortunately the Support Vector Machine is a batch algorithm: if you want to add more data, you have to retrain on the whole set again. (CvSVM::read just reloads a previously saved classifier; it does not let you add new samples.)
There are online-learning alternatives, like Pegasos SVM, but I am not aware of any that is implemented in OpenCV.
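If you can step outside OpenCV, one practical online alternative is scikit-learn's SGDClassifier with hinge loss, which performs Pegasos-style incremental updates through partial_fit; the feature arrays below are random placeholders:

import numpy as np
from sklearn.linear_model import SGDClassifier

# A linear SVM trained by stochastic gradient descent (hinge loss)
clf = SGDClassifier(loss="hinge")

# The first call must declare the full set of classes up front
X0, y0 = np.random.rand(100, 32), np.random.randint(0, 3, 100)
clf.partial_fit(X0, y0, classes=np.array([0, 1, 2]))

# Later sessions: keep feeding new batches without retraining from scratch
X1, y1 = np.random.rand(50, 32), np.random.randint(0, 3, 50)
clf.partial_fit(X1, y1)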