I'm using Detectron2 with Cascade R-CNN.
I'm predicting 4 different classes, with a training dataset of around 6000 object boxes.
I used a ResNet-50 backbone and got around 80% accuracy.
Then I tried a ResNet-101 backbone and training diverged at the first iteration.
1. Does a small dataset with a big model easily diverge at the first iteration?
2. What happens with a small dataset and a big backbone?
3. In my case, why did augmentation + the ResNet-50 backbone decrease the results?
4. How should I fix this divergence problem? (What is the next step?)
As we can see in the Detectron2 model zoo for Cascade R-CNN, the pre-trained models only use ResNet-50 as the backbone (config 1 and config 2).
So the most likely reason your model diverges is that, with no pre-trained ResNet-101 Cascade R-CNN available, most of the parameters need to be trained from scratch, which is much more difficult and requires much more data.
Please also note that using a larger model increases the risk of overfitting.
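As a concrete next step for question 4, one thing you could try is to start from the pre-trained ResNet-50 Cascade R-CNN weights and make early training gentler. Below is a minimal sketch, assuming your dataset is already registered under the hypothetical name "my_train_set"; the solver values are illustrative starting points, not tuned settings.

    from detectron2 import model_zoo
    from detectron2.config import get_cfg

    # Start from the pre-trained Cascade R-CNN (ResNet-50 FPN) config and weights
    # from the model zoo instead of an untrained ResNet-101 backbone.
    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file("Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml")

    cfg.DATASETS.TRAIN = ("my_train_set",)   # hypothetical registered dataset name
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4      # the 4 classes mentioned above

    # A smaller base learning rate and a longer warm-up often tame divergence
    # in the first iterations.
    cfg.SOLVER.BASE_LR = 0.00025
    cfg.SOLVER.WARMUP_ITERS = 1000

If training still blows up after this, halving the learning rate again is usually the next knob to turn.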
I have a pose estimation model pre-trained on a dataset in which hands appear in their natural color. I want to fine-tune that model on a dataset of the hands of surgeons performing surgery. Those hands wear surgical gloves, so the images of the hands look a bit different from normal hands.
[pre-training example image]
[fine-tuning example image]
Does this difference in hand colors affect the model performance?
If I make the images of those surgical hands look more like normal hands, will I get better performance?
Well, it depends on what your pre-trained model has learned to capture from the pre-training (initial) dataset. Suppose your model had many feature maps and not enough skin color variation in your pre-training dataset (which leads to overfitting issues). In that case, your model has likely "taken the path of least resistance" and exploited that, learning feature maps that rely on the color space as a means of feature extraction (which might not generalize well due to the color differences).
The more your pre-training dataset matches/overlaps with your target dataset, the better the effects of transfer learning will be. So yes, there is a very high chance that making your target dataset (surgical hands) look more similar to your pre-training dataset will positively impact your model's performance. Moreover, I would conjecture that introducing some color variation (e.g., Color Jitter augmentation) in your pre-training dataset could also help your model generalize to your target dataset.
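For reference, a color-jitter augmentation like the one suggested above could look like this in torchvision; a minimal sketch where the jitter strengths are illustrative starting points, not tuned values.

    from torchvision import transforms

    # Pre-training augmentation pipeline with extra colour variation, so the
    # model is less likely to rely purely on skin colour as a feature.
    train_transform = transforms.Compose([
        transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.1),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])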
I've created a number of models using Google AutoML and I want to make sure I'm interpreting the output data correctly. This is for a linear regression model predicting website conversion rates on any given day.
First, the model gives a feature importance once training has completed. This seems to tell me which feature was most important in predicting the target value, but not necessarily whether it contributes most to larger changes in that value?
Secondly, we have a bunch of local feature weights, which I think tell me the contribution each feature has made to a prediction. So, say the bounce rate has a local weight of -0.002: can we say that the bounce rate for that row decreased the prediction by 0.002? Is there a correct way to aggregate these values, or is it just the range?
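I am not certain how AutoML itself aggregates these values, but a common convention for turning local attributions into a global importance is the mean absolute attribution rather than the range. A hedged sketch with made-up numbers:

    import numpy as np

    # Hypothetical local attributions for one feature (e.g. bounce rate) across rows.
    local_weights = np.array([-0.002, 0.001, -0.004, 0.003])

    # Mean absolute attribution: how strongly the feature moves the prediction
    # on average, regardless of direction. Here it evaluates to 0.0025.
    global_importance = np.abs(local_weights).mean()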
I am working on a comparison of Histogram of Oriented Gradients (HoG) and a Convolutional Neural Network (CNN) for weed detection. I have two datasets of two different weeds.
The CNN architecture is a 3-layer network.
1) The first dataset contains two classes and has 18 images.
The dataset is enlarged using data augmentation (rotation, added noise, illumination changes).
Using the CNN I get a test accuracy of 77%, and with HoG + SVM, 78%.
2) The second dataset contains leaves of two different plants.
Each class contains 2500 images, without data augmentation.
For this dataset, the CNN gives a test accuracy of 94%, and HoG + SVM 80%.
My question is: why am I getting higher accuracy for HoG on the first dataset? The CNN should be much better than HoG.
The only reason that comes to my mind is that the first dataset has only 18 images and is less diverse compared to the second dataset. Is that correct?
Yes, your intuition is right: having such a small dataset (just 18 images before data augmentation) can cause the worse performance. In general, CNNs usually need at least thousands of images. The SVM does not perform as badly because of the regularization (which you most probably use) and because the model probably has far fewer parameters. There are ways to regularize deep nets; e.g., with your first dataset you might want to give dropout a try (see the sketch below), but I would rather try to acquire a substantially larger dataset.
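To make the dropout suggestion concrete, here is a minimal sketch of a small 3-layer CNN with a dropout layer before the classifier, assuming 2 classes and 64x64 RGB inputs; the layer sizes are illustrative, not your exact architecture.

    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Dropout(p=0.5),                  # regularization for the tiny dataset
                nn.Linear(64 * 8 * 8, num_classes), # 64x64 input -> 8x8 after 3 poolings
            )

        def forward(self, x):
            return self.classifier(self.features(x))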
Currently I am following the Caffe ImageNet example but applying it to my own training dataset. My dataset has about 2000 classes and about 10-50 images per class. I am classifying vehicle images cropped to the front, so the images within each class have the same size and (almost) the same view angle.
I've tried the ImageNet schema, but it doesn't seem to work well: after about 3000 iterations the accuracy was down to 0. So I am wondering whether there is a practical guide on how to tune the schema.
You can delete the last layer of the ImageNet model, add your own last layer with a different name (sized to your number of classes), give it a higher learning rate, and set a lower overall learning rate. There is an official example here: http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html
However, if the accuracy dropped to 0 you should check the model parameters first; perhaps it's an overflow.
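In pycaffe, the fine-tuning workflow from that example boils down to copying the pre-trained weights by layer name, so a renamed last layer starts from fresh weights sized for your 2000 classes. A hedged sketch with hypothetical file names:

    import caffe

    caffe.set_mode_gpu()

    # 'solver_vehicles.prototxt' is assumed to point at a copy of the ImageNet
    # train_val prototxt in which the last InnerProduct layer has been renamed
    # (e.g. "fc8_vehicles") and num_output set to 2000, with a higher lr_mult.
    solver = caffe.SGDSolver('solver_vehicles.prototxt')

    # Weights are copied by layer name: all layers except the renamed one start
    # from the pre-trained model; the new last layer is randomly initialized.
    solver.net.copy_from('bvlc_reference_caffenet.caffemodel')
    solver.solve()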
This question is related to object detection: basically, detecting any "known" object. For example, imagine I have the objects below.
Table
Bottle
Camera
Car
I will take 4 photos of each of these individual objects: one from the left, another from the right, and the other 2 from above and below. I originally thought it would be possible to recognize these objects with these 4 photos each, because once you have photos from all 4 angles, you can detect the object no matter how you see it.
But I got confused by someone's idea of training the engine with thousands of positive and negative images for each object. I really don't think this is required.
So, simply speaking, my question is: in order to identify an object, do I need these thousands of positive and negative images? Or are 4 photos from 4 angles simply enough?
I am expecting to use OpenCV for this.
Update
Actually, the main thing is something like this: imagine that I have 2 laptops, one Dell and the other HP. Both are laptops, but they have clearly visible differences, including the logo. Can we do this using feature description? If not, how "hard" is the "training" process? How many pictures are needed?
Update 2
I need to detect "specific" objects, not all cars, all bottles, etc. For example, the "Maruti Car Model 123" and the "Ferrari Car Model 234" are both cars, but they are different. Imagine I have pictures of the Maruti and the Ferrari of the above-mentioned models; then I need to detect them. I don't have to worry about other cars or vehicles, or even about other models of Maruti and Ferrari. But the above-mentioned "Maruti Car Model 123" should be identified as "Maruti Car Model 123", and the above-mentioned "Ferrari Car Model 234" should be identified as "Ferrari Car Model 234". How many pictures do I need for this?
Answers:
If you want to detect a specific object and you don't need to account for viewpoint changes, you can use 2D features:
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
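Along the lines of that tutorial, here is a minimal sketch of the 2D-feature idea with ORB; the file names are hypothetical and the thresholds are illustrative.

    import cv2

    # One good reference photo of the specific object, and a test photo to search.
    reference = cv2.imread('maruti_model_123_reference.jpg', cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread('test_photo.jpg', cv2.IMREAD_GRAYSCALE)

    # ORB keypoints + descriptors (patent-free alternative to SIFT/SURF).
    orb = cv2.ORB_create()
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_scene, des_scene = orb.detectAndCompute(scene, None)

    # Brute-force Hamming matching with cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_scene)

    # Crude decision rule: enough close matches -> the specific object is likely present.
    good = [m for m in matches if m.distance < 50]
    print('object found' if len(good) > 15 else 'object not found')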
To distinguish between 2 logos, you'll probably need to build a detector for each logo which will be trained on a set of images. For example, you can train a Haar cascade classifier.
To distinguish between different models of cars, you'll probably need to train a classifier using training images of each car. However, I encountered an application which does that using a nearest-neighbor approach: it just extracts features from the given test image and compares them to a known set of images of different car models.
Also, I can recommend some approaches and packages if you explain more about the application.
To answer the question you asked in the title: if you want to be able to determine what the object in the picture is, you need a supervised (i.e., trained) algorithm. Otherwise you would be able to determine, in some cases, the edges or the presence of an object, but not what kind of object it is. In order to tell what the object is, you need a labelled training set.
Regarding the contents of the question, the number of possible angles in a picture of an object is infinite. If you have just four pictures in your training set, the test example could be taken at an angle that falls halfway between training example A and training example B, making it hard for your algorithm to recognize. The larger the training set, the higher the probability of recognizing the object. Be careful: you never reach absolute certainty that your algorithm will recognize the object; it just becomes more likely.