Classification of lightning type in images - C++

I need to write an application that uses image processing to identify the type of lightning in an image. The two types it has to distinguish are cloud-to-ground and intracloud lightning, shown in the pictures linked below. Cloud-to-ground lightning strikes the ground and has flashes branching downwards; intracloud lightning makes no contact with the ground. Are there any image processing algorithms I could use to detect these features so that the application can determine the lightning type? I want to implement this in C++ using the CImg library.
Thanks in advance.
(Since I can't upload photos as a new user, here is a link to the images: http://wvlightning.com/types.shtml)

Wow, this seems like a fun algorithm. If you had a large set of images for each type you might be able to use Haar training (http://note.sonots.com/SciSoftware/haartraining.html), but I'm not sure that would work given the variable shape of lightning. Maybe Haar training in combination with your own algorithm. For instance, it should be very straightforward to tell whether the lightning reaches the ground. You could use some OpenCV image analysis for that - http://www.cs.iit.edu/~agam/cs512/lect-notes/opencv-intro/
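To illustrate the ground-contact check, here is a minimal CImg sketch. It assumes a mostly dark sky where the lightning channel is the brightest thing in the frame; the file name and thresholds are placeholders you would have to tune:

    #include "CImg.h"
    #include <iostream>
    using namespace cimg_library;

    int main(int argc, char **argv) {
        const char *filename = (argc > 1) ? argv[1] : "lightning.jpg"; // placeholder path
        const CImg<unsigned char> img(filename);

        const int w = img.width(), h = img.height();
        const int bottom_rows = h / 20;   // inspect the bottom 5% of the image (tunable)
        const int bright = 230;           // brightness threshold (tunable)

        // Count bright pixels in the bottom strip; a channel that reaches the
        // ground should leave a bright trace near the image bottom.
        long bright_count = 0;
        for (int y = h - bottom_rows; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int sum = 0;              // average the channels so RGB and grayscale both work
                for (int c = 0; c < img.spectrum(); ++c) sum += img(x, y, 0, c);
                if (sum / img.spectrum() >= bright) ++bright_count;
            }

        // Heuristic: a bright trace at the bottom suggests cloud-to-ground lightning.
        const bool touches_ground = bright_count > w / 50;
        std::cout << (touches_ground ? "cloud-to-ground" : "intracloud") << std::endl;
        return 0;
    }

This obviously breaks down for daytime shots or bright foregrounds, so you would want to combine it with the downward-branching cue or a trained detector.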

Related

AWS image processing

I am working on a project where I need to take a picture of a surface using my phone and then analyze the surface for defects and marks.
I want to take the image and then send it to the cloud for analysis.
Does AWS-Rekognition provide such a service to analyze the defects I want to study?
Or would I need to write custom code using OpenCV or something similar?
While Amazon Rekognition can detect faces and objects, it has no idea what is meant by a "defect".
Imagine if you had 10 people lined up and showed them a picture, asking them if they could see a defect. Would they all agree? They'd probably ask you what you mean by a defect and how bad something has to look before it could be considered a defect.
Similarly, you would need to train a system on what is a valid defect and what is not a defect.
This is a good use case for Amazon SageMaker. You would need to provide LOTS of sample images of defects and not-defects. They should be shot from many different angles in many different lighting situations, similar to the images you would want to test.
It would then build a model that could be used for detecting 'defects' in supplied images. You could even put the model into an AWS DeepLens unit to do the processing locally.
Please note, however, that you need to provide a large number of images (hundreds is good, thousands is better) to be able to train it to correctly detect 'defects'.

Stitching a full spherical mosaic using only a smartphone and sensor data?

I'm really interested in the Google Street View mobile application, which integrates a method to create a fully functional spherical panorama using only your smartphone camera. (Here's the procedure for anyone interested: https://www.youtube.com/watch?v=NPs3eIiWRaw)
What strikes me the most is that it always manages to create the full sphere, even when stitching a featureless, near-monochrome blue sky or ceiling, which gets me thinking that they're not using feature-based matching.
Is it possible to get a decent quality full spherical mosaic without using feature based matching and only using sensor data? Are smartphone sensors precise enough? What library would be usable to do this? OpenCV? Something else?
Thanks!
The features are needed for registration. In the app, the clever UI makes sure they already know where each photo sits relative to the sphere, so in the extreme case all they have to do is reproject/warp and blend. No additional geometry processing is needed.
I would assume that they do try to make some small corrections to improve the registration, but even if those fail, you can fall back on the sensor-based estimates acquired at capture time.
This is a case where a clever UI makes the vision problem significantly easier.
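If you want to try the sensor-only route yourself, OpenCV's stitching module can do the reproject/warp step once you have a rotation per photo. A rough sketch, assuming you already have an approximate intrinsics matrix K and a sensor-derived rotation R for each frame (both names are placeholders):

    #include <opencv2/opencv.hpp>
    #include <opencv2/stitching/detail/warpers.hpp>

    // Reproject a single photo onto the shared sphere using only its orientation.
    // R would come from the phone's rotation sensors, K from a rough calibration;
    // both would normally be recorded at capture time.
    cv::Mat warpToSphere(const cv::Mat &photo, const cv::Mat &K, const cv::Mat &R,
                         float sphere_scale) {
        cv::detail::SphericalWarper warper(sphere_scale);
        cv::Mat K32, R32, warped;
        K.convertTo(K32, CV_32F);   // the warper expects 32-bit float matrices
        R.convertTo(R32, CV_32F);
        // warp() returns the top-left corner of the warped tile on the sphere,
        // which tells you where to paste it into the full panorama canvas.
        cv::Point corner = warper.warp(photo, K32, R32,
                                       cv::INTER_LINEAR, cv::BORDER_CONSTANT, warped);
        (void)corner; // a real pipeline would use this to place and blend the tile
        return warped;
    }

From there it is just compositing/blending the warped tiles; the sensor rotations take the place of feature-based registration, and any feature-based refinement is optional.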

Face recognition using neural networks

I am doing a project on face recognition, for which I have already tried different methods such as eigenfaces, Fisherfaces, LBP histograms and SURF. But these methods are not giving me accurate results. SURF gives good matches for exactly the same image, but I need to match one image against different poses of the same person (wearing glasses, a side pose, somebody partially covering their face, etc.). LBP compares histograms of images, i.e., only colour information, so when there is high variation in lighting conditions it does not give good results. I have heard about neural networks, but I don't know much about them. Is it possible to train the system very accurately using neural networks? If so, how can we do that?
According to this OpenCV page, there does seem to be some support for machine learning. That being said, the support does seem to be a bit limited.
What you could do, would be to:
Use OpenCV to extract the face of the person.
Convert the image to greyscale.
Resize it so that the face is always the same size.
All the above should be doable with OpenCV itself (could be wrong, haven't messed with OpenCV in a while) so that should save you some time.
Next, you take the image, as a bitmap maybe, and feed the bitmap as a vector to the neural network. Alternatively, as @MatthiasB recommended, you could feed features instead of individual pixels. This would simplify the data being passed, making the network easier to train.
As for training, you preprocess the images as above and then feed them to the network. If a person uses glasses occasionally, you could include cases of the same person with and without glasses, etc.
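A rough OpenCV sketch of the preprocessing steps above (the cascade file and the 32x32 network input size are just placeholders):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Turn one photo into a fixed-size greyscale face vector suitable for a
    // neural network input layer.
    std::vector<float> facePixelsForNetwork(const cv::Mat &photo,
                                            cv::CascadeClassifier &faceCascade,
                                            cv::Size netInput = cv::Size(32, 32)) {
        cv::Mat gray;
        cv::cvtColor(photo, gray, cv::COLOR_BGR2GRAY);     // step 2: greyscale
        cv::equalizeHist(gray, gray);                      // reduce lighting variation

        std::vector<cv::Rect> faces;
        faceCascade.detectMultiScale(gray, faces, 1.1, 3); // step 1: extract the face
        if (faces.empty()) return {};

        cv::Mat face = gray(faces[0]);                     // take the first detection
        cv::resize(face, face, netInput);                  // step 3: fixed size

        // Flatten to a normalized vector, one network input per pixel.
        std::vector<float> input;
        input.reserve(netInput.area());
        for (int y = 0; y < face.rows; ++y)
            for (int x = 0; x < face.cols; ++x)
                input.push_back(face.at<unsigned char>(y, x) / 255.0f);
        return input;
    }

You would load the cascade once with something like faceCascade.load("haarcascade_frontalface_default.xml") (the stock OpenCV frontal-face model; adjust the path for your installation) and call this for every training and test image so the network always sees the same representation.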

Auto white balancing for camera

I am developing a sample camera and I am able to control the image sensor directly. The sensor gives out a Bayer image and I need to show the images as a live view.
I have looked at debayering code and also white balancing. Is there any library in C/C++ that can help me with this process?
Since I need a live view, these steps have to run very fast, so I need algorithms that are very fast.
For example, I can change the RGB gains on the sensor, so I need an algorithm that acts at that level instead of acting on the generated image.
Is there any library that helps to save images in raw format?
SimpleCV has a function for white balance control:
SimpleCV project web site
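If you stay with C/C++ and OpenCV instead, debayering and a simple gray-world white balance take only a few lines, and the gains the gray-world step produces are exactly the kind of per-channel values you could write back to the sensor's RGB gain registers rather than touching every pixel of the generated image. A sketch (the Bayer pattern code is an assumption and depends on your sensor layout):

    #include <opencv2/opencv.hpp>

    // Bilinear debayering with OpenCV's built-in demosaicing (OpenCV 3+).
    cv::Mat debayer(const cv::Mat &raw8) {
        cv::Mat bgr;
        cv::demosaicing(raw8, bgr, cv::COLOR_BayerBG2BGR); // pattern code depends on the sensor
        return bgr;
    }

    // Gray-world white balance: assume the average scene colour is neutral gray
    // and derive per-channel gains. These gains can be applied on the sensor.
    void grayWorldGains(const cv::Mat &bgr, double &gainR, double &gainG, double &gainB) {
        cv::Scalar mean = cv::mean(bgr);   // mean[0]=B, mean[1]=G, mean[2]=R
        const double gray = (mean[0] + mean[1] + mean[2]) / 3.0;
        gainB = gray / mean[0];
        gainG = gray / mean[1];
        gainR = gray / mean[2];
    }

Both operations are cheap enough for a live view; computing the gains on a downscaled preview frame makes them cheaper still.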

stitching aerial images to create a map

I am working on a project to stitch together around 400 high-resolution aerial images (around 36000x2600) to create a map. I am currently using OpenCV and so far I have obtained the match points between the images. Now I am at a loss figuring out how to get the matrix transformation between the images so I can begin the stitching process. I have absolutely no background in working with images or graphics, so this is a first for me. Can I get some advice on how I would approach this?
The images I received also came with a data sheet showing the longitude, latitude, airplane wing angle, altitude, etc. of each image. I am unsure how accurate these data are, but I am wondering if I can use this information to perform the matrix transformation I need.
Thanks
Do you want to understand the math behind the process, or just have a superficial idea of what's going on and simply use it?
The regular term for "image stitching" is image alignment. Feed Google with it and you'll find tons of sources.
For example, here.
Best regards,
zhengtonic
The recent OpenCV 2.3 release implemented a whole image-stitching pipeline. It may be worth looking at.
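Since you already have the match points, a rough OpenCV sketch for getting the transformation and warping one image into the other's frame could look like this (pairwise only; a full mosaic needs a proper bounding-box computation and blending step, or the stitching pipeline mentioned above):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Given matched point coordinates between two overlapping aerial images,
    // estimate the homography that maps image 2 into image 1's frame and warp
    // image 2 onto a canvas so the pair can be composited.
    cv::Mat stitchPair(const cv::Mat &img1, const cv::Mat &img2,
                       const std::vector<cv::Point2f> &pts1,
                       const std::vector<cv::Point2f> &pts2) {
        // RANSAC discards mismatched point pairs while fitting the 3x3 transform.
        cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);

        // Crude canvas sized to hold both images side by side; a real mosaic
        // would compute the exact extent from the warped corners of img2.
        cv::Mat canvas;
        cv::warpPerspective(img2, canvas, H,
                            cv::Size(img1.cols + img2.cols, img1.rows));
        img1.copyTo(canvas(cv::Rect(0, 0, img1.cols, img1.rows)));
        return canvas;
    }

The geotags and flight data could give you an initial placement for each image (useful for deciding which pairs overlap), but the pixel-accurate alignment will still come from the match points.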