I am using OpenCV 3.0, which has support for creating HDR images, and I am trying to produce an HDR image from three images taken at different exposures.
I found this OpenCV tutorial:
http://docs.opencv.org/master/d3/db7/tutorial_hdr_imaging.html#gsc.tab=0
It is easy to understand, but it takes parameters such as the exposure time of each image.
How do I get these exposure times? I only have the images. Has anyone tried this already?
Thanks
The exposure time is stored in the image's EXIF data. In an Explorer window, right-click the image and go to Properties; you will see some of the EXIF data, including the exposure time, if the image contains it.
Alternatively, you can write a program to extract the metadata to a text file. I used the Python exifread library to read the exposure time.
https://pypi.python.org/pypi/ExifRead
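Once you have the exposure times in seconds (EXIF usually stores them as fractions like 1/60), they plug straight into the pipeline from the tutorial you linked. A minimal sketch using the OpenCV 3 C++ API, with hypothetical file names and made-up exposure times:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        // Hypothetical file names; replace with your three bracketed shots.
        std::vector<cv::Mat> images;
        images.push_back(cv::imread("under.jpg"));
        images.push_back(cv::imread("normal.jpg"));
        images.push_back(cv::imread("over.jpg"));

        // Exposure times in seconds, taken from each file's EXIF
        // ExposureTime tag (made-up values here).
        std::vector<float> times;
        times.push_back(1.0f / 60.0f);
        times.push_back(1.0f / 8.0f);
        times.push_back(1.0f);

        // Recover the camera response curve, then merge into one HDR image.
        cv::Mat response;
        cv::Ptr<cv::CalibrateDebevec> calibrate = cv::createCalibrateDebevec();
        calibrate->process(images, response, times);

        cv::Mat hdr;
        cv::Ptr<cv::MergeDebevec> merge = cv::createMergeDebevec();
        merge->process(images, hdr, times, response);

        // Tonemap the HDR result down to a displayable 8-bit image.
        cv::Mat ldr;
        cv::Ptr<cv::Tonemap> tonemap = cv::createTonemap(2.2f);
        tonemap->process(hdr, ldr);
        ldr.convertTo(ldr, CV_8UC3, 255);
        cv::imwrite("result.png", ldr);
        return 0;
    }

Note also that the MergeMertens fusion shown in the same tutorial does not need exposure times at all, if a nicely fused LDR result is all you are after.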
Related
I've been trying to write a program that processes image data from an ESP32-CAM. To process the image on the device I cannot store the data as JPEG; I have to use GRAYSCALE instead, which limits me to a very slow frame rate. Long story short, I only need a thin, predefined sliver of the photo, and I was hoping I could create a custom image format that captures only that portion, to speed up processing. I'm fairly new to C++ and don't want to break anything in the package's functions. Does anyone know how to create a custom image format in the esp32-camera library (ESP-IDF framework) that takes only a small portion of the camera's data?
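Rather than defining a new pixel format inside the camera driver, a simpler route is to take a normal GRAYSCALE frame and copy out only the rows you need before processing. A minimal sketch, assuming the stock esp32-camera framebuffer API (esp_camera_fb_get / esp_camera_fb_return) and hypothetical row bounds ROW_START/ROW_END:

    #include <string.h>
    #include "esp_camera.h"

    // Hypothetical sliver bounds: rows [ROW_START, ROW_END) of the frame.
    #define ROW_START 100
    #define ROW_END   110

    // Copy just the wanted rows out of a GRAYSCALE framebuffer.
    // Returns the number of bytes written into 'out'.
    size_t grab_sliver(uint8_t *out, size_t out_cap)
    {
        camera_fb_t *fb = esp_camera_fb_get();  // one full frame from the driver
        if (!fb || fb->format != PIXFORMAT_GRAYSCALE) {
            if (fb) esp_camera_fb_return(fb);
            return 0;
        }

        size_t row_bytes = fb->width;  // GRAYSCALE: 1 byte per pixel
        size_t need = (ROW_END - ROW_START) * row_bytes;
        if (need > out_cap) need = out_cap;

        memcpy(out, fb->buf + ROW_START * row_bytes, need);

        esp_camera_fb_return(fb);  // hand the buffer back to the driver promptly
        return need;
    }

Returning the framebuffer immediately after the memcpy keeps the driver's buffer free for the next frame, which is where most of the frame-rate headroom comes from.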
I am using the Bumblebee2 camera and I am having trouble with acquiring stereo images from it. When I attempt to access the camera using MATLAB, the program crashes.
Does anyone know how I can acquire the stereo images using FlyCapture?
MATLAB cannot read the Bumblebee2 output directly. To do that you'll have to record the stream and process it offline. I wrote a proprietary recorder based on the code samples in the SDK. You can split the left/right images and record each one in a separate video container (e.g. using OpenCV to write a compressed AVI file). Later, you can load these images into memory and use Triclops to compute disparity maps (or, alternatively, use OpenCV to run other algorithms, such as semi-global block matching).
FlyCapture can capture image series or video clips, but you have less control over what you get. I suggest you use the code samples to write a simple recorder, and then load your output into MATLAB in standard ways. Consult Point Grey tech support if you get stuck.
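A minimal sketch of the recording side with the OpenCV 3 C++ API. Pulling and splitting the frames from the FlyCapture SDK follows its code samples and is not shown; this class just writes each already-split pair into two compressed AVI files that MATLAB's VideoReader can open later:

    #include <opencv2/opencv.hpp>

    // Writes already-split left/right frames into two MJPG-compressed AVIs.
    class StereoRecorder
    {
    public:
        StereoRecorder(cv::Size size, double fps)
            : left_("left.avi",
                    cv::VideoWriter::fourcc('M','J','P','G'), fps, size),
              right_("right.avi",
                     cv::VideoWriter::fourcc('M','J','P','G'), fps, size)
        {}

        // Call once per captured stereo pair.
        void write(const cv::Mat &left, const cv::Mat &right)
        {
            left_  << left;
            right_ << right;
        }

    private:
        cv::VideoWriter left_, right_;
    };

MJPG is a safe codec choice here because each frame is compressed independently, so a crash mid-recording loses at most the final frame.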
I am using the GDAL C++ library to reclassify raster map images and then create an output image of the new data. However, when I create the new image and open it, the classification values don't have a color defined, so I just get a black image. I can fix this by going into the image properties and setting a color for each of the 10 classification values I'm using, but that is extremely time-consuming given the number of maps and trials I am doing.
My question is: is there a way to set metadata through the GDAL API to define a color for each classification value? Just the name of the right function would be great; I can figure it out from there.
I have tried this in ArcGIS and Quantum GIS, and both have the same problem. The file type I am using is Erdas Imagine (called "HFA" in GDAL).
You can use the SetColorTable() method on your raster band. The easiest approach is to fetch the color table from a pre-existing raster with GetColorTable() and pass it to your new raster.
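If you want to build the table from scratch instead, a minimal sketch (arbitrary demo colors, assuming your 10 class values are 0-9 in band 1):

    #include "gdal_priv.h"

    // Attach a 10-entry palette to band 1 so each class value has a fixed color.
    void attachClassColors(GDALDataset *ds)
    {
        GDALRasterBand *band = ds->GetRasterBand(1);
        GDALColorTable table(GPI_RGB);

        for (int cls = 0; cls <= 9; ++cls)
        {
            GDALColorEntry entry;
            entry.c1 = short(cls * 28);        // red ramp (demo values)
            entry.c2 = short(255 - cls * 28);  // green ramp
            entry.c3 = 96;                     // constant blue
            entry.c4 = 255;                    // fully opaque
            table.SetColorEntry(cls, &entry);
        }

        band->SetColorInterpretation(GCI_PaletteIndex);
        band->SetColorTable(&table);  // the driver keeps its own copy
    }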
Are there any good examples of how to create a WebM video file suitable for streaming to a web browser using the open-source WebM encoding library? Where should I begin? I am the owner of a small business, so I don't want to get into legal issues with FFmpeg, and I can't seem to figure out how vpx_encoder.h is supposed to work. I am also interested in performing the reverse, to create a video player in my application. I realize my question is similar to this one; however, I found neither of the two answers satisfactory.
To be more specific; the images are coming from a GDI+ bitmap object.
Take a look at my code; I used DevIL to handle the image files and manually converted the pixels from RGB to YV12.
http://code.google.com/p/ortholab/source/browse/WebMEnc/WebMEnc.cpp
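For the vpx_encoder.h part specifically, the basic shape of the encode loop looks like the sketch below. This produces raw compressed VP8 packets only; muxing those into a WebM (Matroska) container is a separate step (libwebm's mkvmuxer classes are the usual route). Sizes, frame counts, and the frame-fill step are placeholders:

    #include <vpx/vpx_encoder.h>
    #include <vpx/vp8cx.h>
    #include <stddef.h>

    bool encode_frames(int width, int height, int fps, int nframes)
    {
        vpx_codec_enc_cfg_t cfg;
        if (vpx_codec_enc_config_default(vpx_codec_vp8_cx(), &cfg, 0))
            return false;
        cfg.g_w = width;
        cfg.g_h = height;
        cfg.g_timebase.num = 1;    // one tick per frame at 'fps'
        cfg.g_timebase.den = fps;

        vpx_codec_ctx_t codec;
        if (vpx_codec_enc_init(&codec, vpx_codec_vp8_cx(), &cfg, 0))
            return false;

        vpx_image_t *img = vpx_img_alloc(NULL, VPX_IMG_FMT_I420,
                                         width, height, 1);

        for (int i = 0; i < nframes; ++i)
        {
            // ...fill img->planes[VPX_PLANE_Y/U/V] from your converted bitmap...
            vpx_codec_encode(&codec, img, i, 1, 0, VPX_DL_GOOD_QUALITY);

            // Drain all compressed packets produced by this frame.
            vpx_codec_iter_t iter = NULL;
            const vpx_codec_cx_pkt_t *pkt;
            while ((pkt = vpx_codec_get_cx_data(&codec, &iter)) != NULL)
            {
                if (pkt->kind == VPX_CODEC_CX_FRAME_PKT)
                {
                    // pkt->data.frame.buf / pkt->data.frame.sz is one
                    // compressed frame; hand it to your muxer here.
                }
            }
        }

        // Flush the encoder by passing a NULL frame (then drain again).
        vpx_codec_encode(&codec, NULL, -1, 1, 0, VPX_DL_GOOD_QUALITY);

        vpx_img_free(img);
        vpx_codec_destroy(&codec);
        return true;
    }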
I'm building a webcam application as my C++ college project. I am integrating Qt (for the GUI) and OpenCV (for image processing). My application will be a simple webcam app that accesses the webcam, shows/records video, captures images, and so on.
I also want a feature for adding clipart to captured images or to the streaming video. In my research, I found that there is no direct way to overlay two images in OpenCV. The best alternative I could find was to compose the clipart into the original image, making the two a single image. That won't work for me, because I need to be able to move, resize, and rotate the clipart on my canvas.
So, I was wondering if anybody could tell me how to achieve the effect I want most efficiently.
I would really appreciate your help. The project deadline is closing in and this is a huge bump on the road to completion. PLEEEASE... HELP!!
If you just want to stick a logo onto the OpenCV image, then you simply define a region of interest (ROI) on the destination video frame and copy the source image into it (the details vary with each version of OpenCV).
If you want the logo to be semi-transparent, like a TV channel ID, then you can copy the image but loop over the pixels, writing a destination value of source_pixel/2 + dest_pixel/2.
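A minimal sketch of both variants with the OpenCV C++ API (cv::addWeighted does the per-pixel blend for you, so no explicit loop is needed; names here are illustrative):

    #include <opencv2/opencv.hpp>

    // Paste 'logo' onto 'frame' at (x, y); alpha < 1 makes it semi-transparent.
    // Assumes logo and frame have the same type (e.g. CV_8UC3) and the ROI
    // fits inside the frame.
    void overlayLogo(cv::Mat &frame, const cv::Mat &logo,
                     int x, int y, double alpha = 1.0)
    {
        // 'roi' is a view into 'frame': writing to it writes into the frame.
        cv::Mat roi = frame(cv::Rect(x, y, logo.cols, logo.rows));

        if (alpha >= 1.0)
            logo.copyTo(roi);  // opaque paste
        else
            cv::addWeighted(logo, alpha, roi, 1.0 - alpha, 0.0, roi);  // blend
    }

Since the clipart is composited fresh onto every frame, moving, resizing, or rotating it is just a matter of transforming the clipart (cv::resize, cv::warpAffine) and recomputing (x, y) before each paste; the underlying video frame is never permanently modified.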