I have watched many videos on how to compute the cross-validation likelihood when smoothing a curve with a kernel, and none of the packages works well for me. I need simple code, let's say using "mtcars". Any help with that, please? And if I want to change the bandwidth (h), what code should I use?
I tried caret but it did not work. I hope you can give me working code using mtcars so I can reuse it for any data I may have.
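The question asks for R, but since no package was working, here is a minimal self-contained sketch of the underlying idea in Python with NumPy: leave-one-out cross-validation over a grid of bandwidths h for a Nadaraya-Watson kernel smoother. The data is synthetic (standing in for mtcars' wt and mpg columns, which are R-specific), and the grid of bandwidths is an arbitrary placeholder; the same double loop translates line for line to R.

```python
import numpy as np

def nw_smooth(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel smoother with a Gaussian kernel and bandwidth h."""
    # Pairwise kernel weights: one row per evaluation point
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def loocv_score(x, y, h):
    """Leave-one-out CV: mean squared error of predicting each point from the rest."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i          # drop point i from the training set
        pred = nw_smooth(x[mask], y[mask], x[i:i + 1], h)[0]
        errs.append((y[i] - pred) ** 2)
    return float(np.mean(errs))

# Synthetic stand-in for mtcars: x ~ wt, y ~ mpg with a roughly linear trend
rng = np.random.default_rng(0)
x = rng.uniform(1.5, 5.5, 60)
y = 37 - 5 * x + rng.normal(0, 2, 60)

bandwidths = np.linspace(0.1, 2.0, 20)          # candidate values of h
scores = [loocv_score(x, y, h) for h in bandwidths]
best_h = bandwidths[int(np.argmin(scores))]
print("best bandwidth:", best_h)
```

Changing the bandwidth is then just a matter of passing a different h to the smoother; the CV loop tells you which one generalizes best.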
I have images that look as follows:
My goal is to detect and recognize the number 31197394. I have already fine-tuned a deep neural network for text recognition. It can successfully identify the correct number if it is provided in the following format:
The only task that remains is detecting the corresponding bounding box. For this purpose, I have tried darknet. Unfortunately, it doesn't recognize anything. Does anyone have an idea of a network that performs better on this kind of image? I know that Amazon Rekognition is able to solve this task, but I need a solution that works offline. So my hopes are still high that there exists a pre-trained network that works. Thanks a lot for your help!
Don't say darknet doesn't work; it depends on how you labeled your dataset. It is true that the numbers you want to recognize are too small, so if you don't make any changes to the images during the pre-processing phase, it will be hard for a neural network to recognize them well. What you can do that will surely work is:
1---> Before labeling, increase the size of all images to 2 times their current size (e.g. 1000 x 1000).
2---> Use this size (1000 x 1000) for the darknet trainer instead of the default size proposed by darknet, which is 416 x 416. You will then have to change the configuration file.
3---> Use the latest darknet version (YOLOv4).
4---> In the configuration file, always keep the number of subdivisions at 1.
I should also point out that this method is very memory-hungry, so you need a machine with more than 16 GB of RAM. The advantage is that it works...
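Step 1 above (doubling image size before labeling) needs no deep-learning tooling at all. As an illustration, here is a hypothetical pure-NumPy nearest-neighbour 2x upscale; in practice you would more likely use an image library's resize function, but this shows exactly what the pre-processing step does to the pixel grid.

```python
import numpy as np

def upscale2x(img):
    """Nearest-neighbour 2x upscale of an HxW or HxWxC image array."""
    img = np.repeat(img, 2, axis=0)   # double the height
    img = np.repeat(img, 2, axis=1)   # double the width
    return img

# Tiny dummy "image" standing in for a real photo
small = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
big = upscale2x(small)
print(big.shape)  # -> (4, 4, 3)
```

After resizing, the width/height (and subdivisions) in the darknet configuration file must be updated to match, as steps 2 and 4 describe.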
Thanks for your answers, guys! You were right, I had to fine-tune YOLO to make it work. So I created a dataset and fine-tuned YOLOv5. I am surprised how good the results are: despite having only about 300 images in total, I get an accuracy of 97% at predicting the correct number. This is mainly due to strong augmentations. And indeed the memory requirements are large, but I could train on a 32 GB RAM machine. I can really encourage anyone who faces similar problems to give YOLO a shot!
Maybe use an R-CNN to identify the region where the number is, and then pass that region to your fine-tuned neural network for the digit classification.
I would like to speed up the forward pass of classification with a CNN using Caffe.
I have tried batch classification in Caffe using the code provided here:
Modifying the Caffe C++ prediction code for multiple inputs
This solution lets me pass a vector of Mat, but it does not speed anything up, even though the input layer is modified.
I am processing fairly small images (3x64x64) on a powerful PC with two GTX 1080s, and there is no issue in terms of memory.
I tried also changing the deploy.prototxt, but I get the same result.
It seems that at one point the forward pass of the CNN becomes sequential.
I have seen someone pointing this out here also:
Batch processing mode in Caffe - no performance gains
Another similar thread, for python : batch size does not work for caffe with deploy.prototxt
I have seen some things about MemoryDataLayer, but I am not sure this will solve my problem.
So I am kind of lost about what to do exactly... does anyone have any information on how to speed up classification time?
Thanks for any help!
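Not a Caffe-specific fix, just an illustration of where batch speedups are supposed to come from: a batched forward pass replaces many matrix-vector products with one matrix-matrix product, which GPUs execute far more efficiently. If the pipeline still processes images one by one internally (as the linked threads suggest), that gain never materializes. A toy NumPy sketch of the equivalence, using made-up layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3 * 64 * 64))        # weights of a toy fully-connected layer
images = rng.normal(size=(32, 3 * 64 * 64))   # a batch of 32 flattened 3x64x64 images

# Sequential: one forward pass per image (what an effectively-sequential pipeline does)
seq_out = np.stack([W @ img for img in images])

# Batched: a single matrix-matrix product over the whole batch
batch_out = images @ W.T

print(np.allclose(seq_out, batch_out))  # -> True
```

Both compute the same outputs; the speedup comes purely from doing the work as one large GEMM, so if timings are identical, the batch is very likely being unrolled into a loop somewhere.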
The OpenCV forum has been unavailable for a few days, so I am posting this question here. I want to implement a C++ class that analyzes an image and determines how good that image is for feature tracking.
One approach has been explained by Vuforia.
https://developer.vuforia.com/library/articles/Solution/Natural-Features-and-Ratings
1) Number of Features
Count the number of features returned; say, require a minimum of 30 features.
2) Local contrast
The variance can be used as a starting point to measure how much variation there is in the image. What sort of preprocessing would this require to get the most out of this metric?
How can we improve on this? With an FT or DFT, would it be possible to see whether there is high contrast at lots of different image frequencies? How would that be achieved?
DFT -> Variance (?)
3) Feature distribution
This can be done with clustering, with a suitable center and mean+s.d. that is comparable to the image dimensions. 95% should be within mean + 2 x s.d. ideally.
4) Avoid organic shapes
This will yield no features, so is the same criteria as the number of features.
5) Avoid repetitive patterns
Match the detected features against each other and make sure there aren't too many duplicates.
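The criteria above can be roughed out without committing to a particular detector. Here is a NumPy-only sketch of checks 1-3 and 5 (check 4, organic shapes, collapses into check 1 as noted); the keypoint coordinates and descriptors are assumed to come from a real detector such as ORB, and every threshold is a placeholder to tune, not a recommendation.

```python
import numpy as np

def rate_trackability(gray, pts, desc, min_features=30):
    """Toy scoring of the criteria above, given a grayscale image (2D array),
    keypoint coordinates pts (N x 2, in x,y order) and descriptors desc (N x D).
    In practice pts/desc would come from a feature detector; here they are inputs."""
    report = {}
    # 1) Number of features (4, organic shapes, yields no features, so same check)
    report["enough_features"] = len(pts) >= min_features
    # 2) Local contrast: intensity variance as a crude starting point
    report["contrast"] = float(gray.var())
    # 3) Feature distribution: keypoint spread relative to image size
    spread = pts.std(axis=0)                  # std dev of x and y coordinates
    extent = np.array(gray.shape[::-1])       # (width, height)
    report["well_spread"] = bool((spread > 0.15 * extent).all())
    # 5) Repetitive patterns: fraction of descriptors whose nearest neighbour
    #    (excluding themselves) is nearly identical
    d = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    dup_frac = float((d.min(axis=1) < 1e-3).mean())
    report["repetitive"] = dup_frac > 0.5
    return report

# Synthetic stand-ins: a noisy image, 50 random keypoints, 50 random descriptors
rng = np.random.default_rng(1)
gray = rng.integers(0, 256, (100, 120)).astype(float)
pts = rng.uniform([0, 0], [120, 100], (50, 2))
desc = rng.normal(size=(50, 32))
print(rate_trackability(gray, pts, desc))
```

For the DFT refinement mentioned under point 2, the variance line could be replaced by statistics of the magnitude spectrum (e.g. how energy spreads across frequencies), but that part is left open here just as it is in the question.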
Vuforia does the same.
But if you want to write your own code to do it, ARToolKit is an open-source SDK that provides the same feature for NFT markers. If you go through the ARToolKit source code, you will find something called "DisplayFeatureSet".
There is also a DisplayFeatureSet.exe, which shows the features (hotspots) of a selected image.
Somehow I managed to get the source code (.c) for this. Here is my Google Drive link to download it; work on it and share your experience:
Source Code to Display Feature Set
Best of luck :)
As the title suggests, I want to build an ANPR application on Windows. I am working with Brazilian number plates, and I am using OpenCV.
So far I have managed to extract the letters from the number plate. The following images show some of the characters I have extracted.
The problem I am facing is how to recognize those letters. I tried Google Tesseract, but it sometimes fails to recognize them. Then I tried to train an OCR database using OpenCV, with about 10 images for each character, but that also did not work properly.
So I am stuck here. I need this for my final-year project, so can anybody help me? I would really appreciate it.
The following site does it very nicely:
https://www.anpronline.net/demo.html
Thank you.
You could train an ANN or a multi-class SVM on the letter images, like here.
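As a concrete (if deliberately tiny) version of that suggestion, here is a hypothetical nearest-centroid classifier in pure NumPy: it learns one mean image per character class and assigns new letter crops to the closest mean. A real system would use the ANN/SVM the answer mentions, but the fit/predict shape of the training pipeline is the same, and the synthetic "letters" below are placeholders for real segmented plate characters.

```python
import numpy as np

class NearestCentroidOCR:
    """Toy stand-in for the suggested ANN/SVM: classify letter images by
    nearest class mean in flattened-pixel space. Illustrative only."""
    def fit(self, images, labels):
        X = np.asarray(images).reshape(len(images), -1).astype(float)
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.stack(
            [X[np.array(labels) == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, images):
        X = np.asarray(images).reshape(len(images), -1).astype(float)
        # Distance from each image to each class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.labels_[i] for i in d.argmin(axis=1)]

# Synthetic 8x8 "letters": class 'A' is bright on the left, 'B' on the right
rng = np.random.default_rng(0)
def make(cls, n):
    imgs = rng.normal(0, 0.1, (n, 8, 8))
    if cls == "A":
        imgs[:, :, :4] += 1.0
    else:
        imgs[:, :, 4:] += 1.0
    return imgs

train = np.concatenate([make("A", 10), make("B", 10)])
labels = ["A"] * 10 + ["B"] * 10
clf = NearestCentroidOCR().fit(train, labels)
print(clf.predict(make("A", 3)))  # -> ['A', 'A', 'A']
```

This also illustrates why 10 images per character is not enough for a harder model: the class means here only work because the synthetic classes are trivially separable.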
Check out OpenALPR (http://www.openalpr.com). It already has the problem solved.
If you need to do it yourself, you really do need to train Tesseract. It will give you the best results. 10 images per character is not enough, you need dozens or hundreds. If you can find a font that is similar to your plate characters, a good approach is to print out a sheet of paper with all of the characters used multiple times. Then take 5-10 pictures of the page with your camera. These can then be your input for training Tesseract.
Here is a fairly easy question, though I have a hard time answering it myself.
We are a group of five people who have to write a report and document everything we do. In our project, we use the function findContours(), which is part of the OpenCV library.
We know that findContours() runs a grass-fire algorithm, but we need to document which kernel we are using, and we don't have a clue about that.
The call we run looks like this:
findContours( mGreenScale, vecContours, vecHierarchy, CV_RETR_CCOMP,
CV_CHAIN_APPROX_SIMPLE );
mGreenScale: the binary image we run the function on.
vecContours: the vector that keeps track of which pixels are part of each contour.
vecHierarchy: we are not quite sure what this is, though we think it is some sort of array that handles the contour hierarchy and keeps track of which contours are edge contours and which are not.
The two other inputs to the function are unknown to us, and we think it is one of those two that defines the kernel we use.
I hope somebody is capable of identifying which kernel we are using.
Thanks in advance; please ask for further information if you feel I have left out something important.
Some background:
We are a small group of not-so-experienced programmers who have limited knowledge of C++ and only began working with OpenCV a month ago. We are on a tight schedule, with documentation that needs to be done within two weeks. We have already been looking through this exact site: OpenCV documentation. Still, there are terms we don't understand.
We don't have the time to check through the source code, nor the experience to do so.
We think it is a grass-fire algorithm because we know of no other algorithm capable of detecting blobs.
What each parameter does is clearly explained in the OpenCV documentation.
And you can find the kernel by looking into the OpenCV code; it's open source. But I am not sure it uses any kernel at all. How do you know about the algorithm if you did not check the source?
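To the point about whether a kernel is involved at all: the grass-fire idea needs no convolution kernel. The only kernel-like choice is the pixel neighbourhood (4- or 8-connectivity) used when the "fire" spreads. Here is an educational Python sketch of grass-fire connected-component labelling; note this illustrates the concept the question names, not OpenCV's actual findContours implementation, which uses a border-following algorithm.

```python
import numpy as np
from collections import deque

def grass_fire_label(binary, connectivity=4):
    """Grass-fire (flood-fill) connected-component labelling of a binary image.
    The 'kernel' here is just the neighbourhood: 4- or 8-connectivity."""
    if connectivity == 4:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        nbrs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                current += 1                      # start "burning" a new blob
                queue = deque([(y, x)])
                labels[y, x] = current
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

img = np.zeros((5, 7), dtype=bool)
img[1:3, 1:3] = True   # blob 1
img[3, 5] = True       # blob 2
labels, n = grass_fire_label(img)
print(n)  # -> 2
```

So if the report must document "the kernel", the honest answer is that the relevant parameter is the connectivity of the neighbourhood, not a convolution mask; the two unidentified arguments in the call (CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE) control contour retrieval mode and point compression, not a kernel.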