Why is my cDCGAN model behaving like this? - computer-vision

I am training a cDCGAN model in PyTorch to generate Covid X-ray images at a resolution of 128x128. While developing, I am limiting it to 2 classes over 500 epochs.

Actual losses: [loss plot] Why does it behave like this?

Class 0 - 6,012 samples
Class 1 - 10,912 samples

Expected losses (testing with a Rock Paper Scissors dataset): [loss plot]

This is the behaviour of the same script trained on the Rock Paper Scissors dataset, which has 840 samples per class. Why does the model behave so differently on the Covid X-ray dataset?
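One visible difference between the two setups is class imbalance (6,012 vs 10,912 samples, versus 840 per class for Rock Paper Scissors). A common mitigation, sketched below in plain Python, is to weight each sample inversely to its class size; these weights could then be passed to something like `torch.utils.data.WeightedRandomSampler` (the label list here is a toy placeholder, not the actual dataset):

```python
# Per-class sample counts from the question.
counts = {0: 6012, 1: 10912}

# Weight each class inversely to its size so both classes are drawn
# with equal total probability during training.
weights = {c: 1.0 / n for c, n in counts.items()}

# Toy label array standing in for the real dataset's labels; in practice
# this per-sample weight list would be fed to a weighted sampler.
labels = [0, 1, 1]
sample_weights = [weights[label] for label in labels]
```

With these weights, each class contributes the same total sampling mass (counts[c] * weights[c] == 1.0 for both classes), so the generator sees both classes equally often.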

Related

Yolo high mAP but fairly low confidence

I am currently working on a project that uses YOLOv5. I trained my model for 100 epochs on a dataset of 5000+ images and got a good mAP of 0.95. My question came up when I tried to detect objects using detect.py with the trained weights: I got fairly low confidence in detection, about 0.30 to 0.70, on certain objects. Should I train my model for more epochs to improve confidence? And does high mAP not result in high confidence?
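Part of the answer is that mAP rewards the ranking of detections rather than their absolute confidence values. The toy average-precision computation below (all numbers invented, and only a loose sketch of the VOC-style AP that detectors report) shows how detections with confidences of 0.30-0.70 can still produce a perfect AP:

```python
# Toy sketch (all numbers invented): AP depends on the *ranking* of
# detections, not on the absolute confidence values, so modest
# confidences (0.30-0.70) can still yield a very high mAP.
# Each detection: (confidence, matched_a_ground_truth_box)
detections = [(0.70, True), (0.65, True), (0.55, True), (0.40, True), (0.30, False)]
total_gt = 4  # assumed number of ground-truth objects in the toy scene

tp = 0
precisions = []
for rank, (conf, is_tp) in enumerate(detections, start=1):
    if is_tp:
        tp += 1
        precisions.append(tp / rank)  # precision at each new recall level

ap = sum(precisions) / total_gt  # == 1.0: perfect ranking, modest confidences
```

Because every true positive outranks the single false positive, AP comes out at 1.0 even though no detection scores above 0.70, which is why high mAP and low per-detection confidence can coexist.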

About bounding boxes of objects

I'm trying to compose a dataset for the detection of soccer players, the ball, etc. in a soccer game, using the AlexeyAB Darknet framework.
In the labeling phase, each image contains at least 8 players, a ball, and other objects. At some point it is logical to expect that I will have enough instances per player, but not enough for the ball or the goalkeeper, for example.
So can I mark bounding boxes only for the ball and the other rare objects, skipping the players, to avoid wasting time?
If you are training the model on your own dataset, I would recommend limiting the labels/classes in your data to what you actually need. For example, if you only want your model to detect balls and goal-posts, and not players, simply keep the classes balls and goal-posts. (This is reminiscent of a classification problem where 0 stands for ball and 1 stands for goal-post.) P.S. You mentioned object detection, not localization, which is the purpose of the YOLO models.
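In practice, restricting the classes means filtering the Darknet label files and remapping the remaining class ids. A hypothetical sketch (the id mapping ball=0, goalkeeper=1, player=2 is assumed for illustration, not taken from the question):

```python
# Hypothetical class ids: ball=0, goalkeeper=1, player=2 (assumed).
# Keep only ball and goalkeeper; remap old ids to new contiguous ids.
keep = {0: 0, 1: 1}  # old id -> new id; players (2) are dropped

def filter_labels(lines, keep):
    """Filter Darknet-format label lines ('cls cx cy w h'), remapping ids."""
    out = []
    for line in lines:
        parts = line.split()
        cls = int(parts[0])
        if cls in keep:
            out.append(" ".join([str(keep[cls])] + parts[1:]))
    return out

# Toy label file contents: one player box, one ball box.
labels = ["2 0.5 0.5 0.1 0.2", "0 0.3 0.4 0.05 0.05"]
filtered = filter_labels(labels, keep)  # player line is removed
```

Running this over every `.txt` label file (and shrinking the class list in the `.names`/`.data` files to match) would give a dataset with only the classes you care about.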

Creating training set from multiple images in Haar Cascade

I am currently working on detecting multiple fruits in a given image. For example, the image can contain fruits like bananas (yellow, red, and green), mangoes, oranges, etc. I was able to create a training set with only one image at a time using opencv_createsamples.
Sample Code:
C:\opencv\build\x64\vc14\bin\opencv_createsamples.exe -img redbanana.jpg -bg bg.txt -info info/info.lst -pngoutput info -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.5 -num 100
Similarly, I have done this for around 5 fruits, which creates a separate vec file for each fruit. It is tedious to do this for each fruit. Is there any way to create a training set from multiple images with a single vec file as the output?
Is there any other methodology to detect multiple fruits in a given image?
A Haar classifier is ideally suited to detecting one class of similar-looking objects quickly, as outlined in the OpenCV documentation at http://docs.opencv.org/2.4/modules/objdetect/doc/cascade_classification.html. For example, the OpenCV repository (https://github.com/opencv/opencv) has a list of classifiers (https://github.com/opencv/opencv/tree/master/data/haarcascades) trained for specific classes of objects.
Unless the objects to be detected are similar (like faces with different features, or cars of different makes and models), training would be more effective with one classifier per fruit, e.g., bananas, oranges, mangoes, etc.
To create a training vector based on multiple positive sample images (and for any other aspect of Haar classifier training), I'd recommend the steps here (steps 5 and 6 in particular) and the details covered at http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html. In your case, the positive images should include all types of bananas, oranges, mangoes, etc., including variation in color.
If you want to train the classifier with different variations of the same fruit, you can generate training samples from multiple images as described here.
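Concretely, opencv_createsamples can build a single vec file from many annotated images when given one combined annotation (`-info`) file. A hypothetical sketch that writes such a file (paths and boxes are illustrative placeholders); the resulting file would be consumed with something like `opencv_createsamples -info positives.info -vec fruits.vec -w 24 -h 24`:

```python
# Build one opencv_createsamples annotation file covering several
# positive images. Each line: <path> <num_boxes> <x y w h> ...
# File names and box coordinates below are illustrative placeholders.
positives = [
    ("redbanana.jpg", [(10, 20, 80, 40)]),  # (x, y, w, h)
    ("mango.jpg", [(5, 5, 60, 60)]),
]

lines = []
for path, boxes in positives:
    coords = " ".join(f"{x} {y} {w} {h}" for x, y, w, h in boxes)
    lines.append(f"{path} {len(boxes)} {coords}")

with open("positives.info", "w") as f:
    f.write("\n".join(lines) + "\n")
```

This yields one annotation file, and therefore one vec file, instead of a separate vec per fruit image.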
However, do note that Haar classifiers work in greyscale and it is difficult to guarantee differentiation between objects like red and yellow bananas.
If you want multiple classes in one classifier, I recommend YOLO (You Only Look Once) or SSD (Single Shot multibox Detector).

Robust algorithm for detecting vehicles before stop line

I need to write a program that uses a camera to detect the presence of a vehicle inside a defined region of the road before the stop line at an intersection (just like an inductive loop). The output will be true or false based on whether a vehicle is visible in that region. The camera can be installed perpendicular to the road or above it. At this point I need an algorithm.
The following image shows a sample implementation for detecting vehicles at the intersection:
After some study in this field, I realized this technique is background subtraction: the program models the background, and when a vehicle enters the area it is detected. But the definition says it detects moving vehicles, so what about cars that stop on the sensor 50%-60% of the time (when the signal light turns red)? Will they become background? Will they be detected the whole time?
I've seen some algorithms in the background subtraction field, like Mixture of Gaussians, but I doubt they work in this real situation because of the above problem.
Currently I have implemented an averaging method using OpenCV under Linux. The program calculates the average of the pixels inside the rectangle, saves this value in a buffer, computes the mode, and compares it with the current frame. But there are problems such as vehicle lights at night, vehicle shadows during the day, and cars stopping on my sensor because of the red signal.
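The averaging approach described above, plus one common fix for the stopped-car problem (freezing background updates while the zone looks occupied), can be sketched in pure Python. All values and the threshold are illustrative; in a real system the per-frame numbers would be mean intensities of the detection zone:

```python
# Toy sketch of a running-average background model for one detection zone.
# Key idea: update the background only while the zone looks empty, so a car
# stopped at a red light is not absorbed into the background.
def detect(frame_means, alpha=0.05, threshold=30.0):
    bg = frame_means[0]          # background estimate from the first frame
    occupied = []
    for mean in frame_means[1:]:
        is_occ = abs(mean - bg) > threshold
        if not is_occ:           # freeze the background while occupied
            bg = (1 - alpha) * bg + alpha * mean
        occupied.append(is_occ)
    return occupied

# Empty road ~100, then a car arrives (~180) and stays stopped.
frame_means = [100, 101, 100, 180, 181, 180, 179]
flags = detect(frame_means)  # stays True for as long as the car is present
```

Because updates are suspended while the zone is occupied, the stopped car keeps being reported instead of fading into the background; lights and shadows would still need extra handling (e.g., working on gradients rather than raw intensity).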
I would recommend proper detection of vehicles rather than separating the foreground from the background. Background subtraction suffers from many lighting-condition problems and is old-fashioned.
In OpenCV you can use, for example, a Haar or LBP cascade for fast and simple detection of vehicles. OpenCV 3.1 ships two utilities for training the detector: opencv_createsamples and opencv_traincascade.
Using the detector is simple, much as in the tutorial. There are also sources on the web where you can download an already pretrained cascade for car detection, and the OpenCV detection code is short and easy to understand.
You can find examples on my blog. I also have a car dataset containing 2000 positive car samples. Just list these samples in bash into a list of positives, then use the createsamples utility and traincascade. LBP cascades are a little faster with comparable performance.
I have trained cascades on Windows and also under Linux; the difference is only in how you run the programs. The training inputs (vec.vec, bg.dat) have to be prepared with the createsamples utility. If you already have a dataset, preparing the training takes about 20 minutes; the problem is where to find the data. I have a dataset on my blog. Also try to understand the script. My parameters -w 32 -h 64 are for people detection; for cars something like -w 32 -h 32 is better.
./opencv_traincascade and parameters:
opencv_traincascade.exe -data v5 -vec vec.vec -bg bg.dat -numPos 540 -numNeg 700 -numStages 11 -numThreads 4 -stageType BOOST -featureType LBP -w 32 -h 64 -minHitRate 0.999995 -maxFalseAlarmRate 0.2 -maxDepth 10 -maxWeakCount 120 -mode ALL
I have also collected a dataset to train the detector; you can download it from the Dataset link.

MLP: Relation between number of iteration and accuracy

I would like to ask somebody for advice. I created a program in C++ using the OpenCV library (v2.4.11), specifically the MLP classifier.
I get about 92% accuracy on 2,000 test screens, but only when I set the number of iterations to 1. With bigger numbers like 100 or 1000 it gets worse (78% at 100, 77% at 1000).
Is it possible that the problem is in the data model and the programming part is correct? Or does it have to be my fault?
Thank you very much.
Is it possible that the problem is in the data model and the programming part is correct?
Yes, the number of iterations, like the number of neurons and the number of layers, is one of the parameters that has a great impact on the overall performance of the MLP classifier. The more iterations you apply to MLP training, the more the MLP adapts/fits to its training data. This leads to high performance on the training data but can eventually lead to poor performance on test data. In that case you have over-trained/over-fitted your MLP.
There are, however, methods (e.g., grid search) for optimizing the parameters of a classifier.
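A minimal grid search over the iteration count could look like the sketch below. `evaluate` is a hypothetical stand-in for "train the MLP for n iterations and measure held-out accuracy"; the mock accuracies reuse the numbers from the question:

```python
# Hypothetical grid search over one hyperparameter (iteration count).
def select_iterations(candidates, evaluate):
    """Pick the candidate with the best validation score."""
    scores = {n: evaluate(n) for n in candidates}
    best = max(scores, key=scores.get)
    return best, scores

# Mock validation accuracies taken from the question's own results.
validation_accuracy = {1: 0.92, 100: 0.78, 1000: 0.77}
best, scores = select_iterations([1, 100, 1000], validation_accuracy.get)
# best == 1: the smallest iteration count generalizes best here,
# consistent with the over-fitting explanation above.
```

In a real setup the validation set must be separate from the test set, and the same loop generalizes to a grid over several parameters (iterations, layer sizes, learning rate) at once.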