YOLO V4 Model for Person Detection with Glare (From Sunlight) - computer-vision

I am looking for a pre-trained YOLO v4 model that can detect people and other objects under glare conditions (e.g. strong sunlight). I could also train a model myself if I find an open-source dataset, but it would be nicer if there were an already trained model I could use, like the one trained on MS COCO.
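As a starting point, the standard COCO-pretrained YOLOv4 weights can be run through OpenCV's DNN module and filtered down to the person class, then evaluated on your glare footage to see whether a specialized model is even needed. Below is a minimal sketch assuming you have downloaded `yolov4.cfg` and `yolov4.weights` from the AlexeyAB darknet release; the image path and thresholds are placeholders.

```python
# Minimal sketch: COCO-pretrained YOLOv4 via OpenCV DNN, keeping only "person" boxes.
# Assumes yolov4.cfg / yolov4.weights from the AlexeyAB darknet release are present.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

img = cv2.imread("glare_scene.jpg")  # placeholder test image with strong sunlight
class_ids, scores, boxes = model.detect(img, confThreshold=0.4, nmsThreshold=0.5)

for class_id, score, box in zip(class_ids.flatten(), scores.flatten(), boxes):
    if int(class_id) == 0:  # class 0 is "person" in the COCO label ordering
        x, y, w, h = box
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", img)
```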

Related

How to detect anomalies from real-time CCTV video?

I built a model which can detect three types of custom objects from CCTV footage. Now I also want the model to detect anomalies in the footage.
For training purposes I only have normal (non-anomalous) videos, and the model should flag anything abnormal that happens.
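One common way to approach this when only normal footage is available is to train an autoencoder on normal frames and flag frames whose reconstruction error is unusually high. The sketch below is illustrative only: the frame size (64x64 grayscale), the network layout, and the mean + 3*std threshold rule are assumptions, not values from the question.

```python
# Sketch: anomaly detection via reconstruction error of an autoencoder trained on normal frames.
import numpy as np
from tensorflow.keras import layers, models

def build_autoencoder(shape=(64, 64, 1)):
    inp = layers.Input(shape=shape)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# normal_frames: float32 array of shape (N, 64, 64, 1), values in [0, 1]
normal_frames = np.random.rand(256, 64, 64, 1).astype("float32")  # placeholder data
ae = build_autoencoder()
ae.fit(normal_frames, normal_frames, epochs=5, batch_size=32, verbose=0)

# Threshold taken from the training error distribution (assumed rule: mean + 3*std).
train_err = np.mean((ae.predict(normal_frames) - normal_frames) ** 2, axis=(1, 2, 3))
threshold = train_err.mean() + 3 * train_err.std()

def is_anomalous(frame):
    """Return True if a single (64, 64, 1) frame reconstructs poorly."""
    err = np.mean((ae.predict(frame[None]) - frame[None]) ** 2)
    return err > threshold
```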

How to make a Haar cascade classifier for hand detection

A classifier which detects hands in images... I am doing skin-based detection for detecting hands ...
"but I need the hand to be detected without relying on skin color"
If you want to detect hands without relying on skin color, you can train your own model with neural nets. If you don't know how to train neural nets, I recommend looking at OpenCV's Cascade Classifier Training. To train your own model with either method, you need a dataset of hand images; there are many hand datasets on the internet.
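If you go the cascade route, using the trained classifier at inference time is straightforward. Here is a minimal sketch assuming you have already produced a `hand_cascade.xml` with `opencv_traincascade`; the file names and `detectMultiScale` parameters are placeholders you would tune for your own data.

```python
# Sketch: run a trained cascade (assumed hand_cascade.xml) over a test image.
import cv2

hand_cascade = cv2.CascadeClassifier("hand_cascade.xml")
img = cv2.imread("test_image.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors usually need tuning per dataset
hands = hand_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in hands:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imwrite("hands.jpg", img)
```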

How can I train YOLOv3 with the Stanford Drone Dataset?

I want to train YOLOv3 with the Stanford Drone Dataset but I don't know how to do it. Does anyone have any ideas?
Stanford Drone Dataset: http://cvgl.stanford.edu/projects/uav_data/
You can use the AlexeyAB repository to annotate your data accordingly. There is a tool there called Yolo_mark which you can use to draw bounding boxes around objects. In the dataset you mentioned, it seems the data is already annotated. If so, you can use MATLAB or any other tool to convert the annotations to YOLO's format, which stores each box's coordinates relative to the image dimensions:
<object-class> <x_center> <y_center> <width> <height>
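For example, a small conversion helper might look like the sketch below. It assumes the source annotations give absolute pixel coordinates as (x_min, y_min, x_max, y_max), which is roughly how the Stanford Drone Dataset annotation files store boxes, and that you map its class names to integer ids yourself.

```python
# Sketch: convert an absolute pixel box to the relative YOLO format shown above.
def to_yolo(x_min, y_min, x_max, y_max, img_w, img_h):
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return x_center, y_center, width, height

# Example: a 200x300-pixel box at (100, 50) in a 1920x1080 frame, class id 0
print(0, *to_yolo(100, 50, 300, 350, 1920, 1080))
```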

Real-time object tracking in OpenCV

I have written an object classification program using BoW clustering and SVM classification. The program runs successfully. Now that I can classify the objects, I want to track them in real time by drawing a bounding rectangle/circle around them. I have researched and came up with the following ideas.
1) Use homography with the training images from the train data directory. The problem with this approach is that the training image needs to be essentially the same as the test image. Since I'm not detecting specific objects, the test images are closely related to the training images but not necessarily an exact match. In homography we find a known object in a test scene. Please correct me if I am wrong about homography.
2) Use feature tracking. I'm planning to extract the SIFT features in the test images that are similar to the training images and then track them by drawing a bounding rectangle/circle. But the issue here is: how do I know which features belong to the object and which belong to the environment? Is there any member function in the SVM class which can return the keypoints or region of interest used to classify the object?
Thank you
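For idea 1, a rough way to see how far homography gets you is the usual SIFT + ratio test + RANSAC pipeline: match features between one training image and the current frame, estimate a homography, and project the training image's corners to get a bounding polygon. This is only a sketch with placeholder file names; as you suspect, it degrades when the test object differs too much from the training image.

```python
# Sketch: locate a reference object in a frame via SIFT matching and homography.
import cv2
import numpy as np

ref = cv2.imread("train_object.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("test_scene.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(frame, None)

# Ratio-test matching (Lowe's 0.75 ratio is a common default)
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

if len(good) >= 4:  # at least 4 correspondences are needed for a homography
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is not None:
        h, w = ref.shape
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        projected = cv2.perspectiveTransform(corners, H)
        out = cv2.polylines(cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR),
                            [np.int32(projected)], True, (0, 255, 0), 2)
        cv2.imwrite("tracked.jpg", out)
```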

OpenCV removing face from face recognizer model

I am developing a simple app that uses OpenCV face recognition for user authentication. I've decided to use the LBPH algorithm since it doesn't require recreating the model each time new faces are added, so the face images don't need to be stored.
Unfortunately, I cannot find any way to remove faces from the model when a user decides to delete their account. This may cause problems if a user deletes their account and then signs up again. Is there any way to remove a face from the model, for example by editing the YAML file where the model is saved? Or should I just check during registration whether a face is already stored in the model?
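As far as I know, the LBPH recognizer only exposes update() for adding samples and has no API for removing a label, so the usual workaround is to keep the labeled training samples on your side and rebuild the model with train() when an account is deleted. Below is a minimal sketch, assuming faces are stored per label and opencv-contrib-python is installed; the storage layout and file names are illustrative assumptions.

```python
# Sketch: rebuild the LBPH model without a deleted user's samples.
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()

# Assumed storage: label -> list of aligned grayscale face crops (uint8 arrays)
samples_by_user = {
    1: [np.zeros((100, 100), np.uint8)],  # placeholder data
    2: [np.zeros((100, 100), np.uint8)],
}

def rebuild_without(deleted_label):
    """Retrain from scratch, skipping the deleted user's samples, and save the model."""
    faces, labels = [], []
    for label, user_faces in samples_by_user.items():
        if label == deleted_label:
            continue
        faces.extend(user_faces)
        labels.extend([label] * len(user_faces))
    recognizer.train(faces, np.array(labels))
    recognizer.write("model.yaml")

rebuild_without(2)  # user 2 deleted their account
```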