How to capture a high-resolution image on Windows Mobile - C++

I would like to capture a high-resolution image with a Windows Mobile device. I've tried the example from the WM SDK, but it only captures a single frame from the video camera and the resolution is poor.
Does anyone have experience with image capture on Pocket PC with C++?
Thanks

You need to change the filter used by the example code to capture a high-resolution image. When you use the viewfinder in a digital camera, the camera "simulates" a video camera look by applying the lowest resolution filter and then rapidly taking and displaying single frames. When you click the button to take a high-res picture, the camera has to swap out the low-res filter for the high-res filter and then take the high-res picture - this is why (cheap) digital cameras always take so long to snap a picture.
I don't know which code example you're working with, but if it's the one I used, it defaults to the lowest-resolution filter. There should be a line in it somewhere that selects the filter; you just need to change the value passed from 0 to (probably) 3 or 4 for the highest resolution.
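If the sample builds a DirectShow capture graph (as the WM SDK camera samples generally do), another option is to enumerate the capture pin's supported formats and pick the largest one explicitly instead of guessing an index. The sketch below is only an outline under that assumption: it presumes you already hold an IAMStreamConfig pointer for the capture (or still) pin, and error handling is minimal.

#include <dshow.h>

// Free an AM_MEDIA_TYPE returned by GetStreamCaps (same pattern as the MSDN samples).
static void FreeMediaTypeLocal(AM_MEDIA_TYPE* pmt)
{
    if (!pmt) return;
    if (pmt->cbFormat) CoTaskMemFree(pmt->pbFormat);
    if (pmt->pUnk) pmt->pUnk->Release();
    CoTaskMemFree(pmt);
}

// Sketch: switch the pin to the highest-resolution format it advertises.
HRESULT SelectLargestFormat(IAMStreamConfig* pConfig)
{
    int count = 0, size = 0;
    HRESULT hr = pConfig->GetNumberOfCapabilities(&count, &size);
    if (FAILED(hr) || size != sizeof(VIDEO_STREAM_CONFIG_CAPS))
        return E_FAIL;

    AM_MEDIA_TYPE* pBest = NULL;
    LONG bestArea = 0;
    for (int i = 0; i < count; ++i)
    {
        VIDEO_STREAM_CONFIG_CAPS caps;
        AM_MEDIA_TYPE* pmt = NULL;
        if (FAILED(pConfig->GetStreamCaps(i, &pmt, (BYTE*)&caps)))
            continue;
        if (pmt->formattype == FORMAT_VideoInfo && pmt->pbFormat)
        {
            VIDEOINFOHEADER* vih = (VIDEOINFOHEADER*)pmt->pbFormat;
            LONG area = vih->bmiHeader.biWidth * vih->bmiHeader.biHeight;
            if (area > bestArea)
            {
                FreeMediaTypeLocal(pBest);   // drop the previous best, keep this one
                bestArea = area;
                pBest = pmt;
                continue;
            }
        }
        FreeMediaTypeLocal(pmt);
    }
    if (!pBest)
        return E_FAIL;
    hr = pConfig->SetFormat(pBest);          // must be called before the graph runs
    FreeMediaTypeLocal(pBest);
    return hr;
}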

Related

Stereo Images Computer Vision

I want to capture a pair of stereo images with my iPhone camera and would like to know how I could do it. What I have tried so far: I clicked one photo, moved my phone a little to the right (about 1.5 inches), and clicked another, but I did not get the desired results.
I want to rectify those images before I can apply OpenCV's StereoBM_create function to them, but I can't capture proper stereo images.
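For the downstream step mentioned here, a rough sketch of uncalibrated rectification followed by StereoBM might look like the following. It assumes the two photos have enough texture for feature matching; the file names and parameter values are placeholders, not a recipe for getting good captures.

// Sketch: uncalibrated rectification + StereoBM on a captured pair.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat left  = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);

    // Find matching points between the two views.
    auto orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kpL, kpR;
    cv::Mat descL, descR;
    orb->detectAndCompute(left,  cv::noArray(), kpL, descL);
    orb->detectAndCompute(right, cv::noArray(), kpR, descR);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(descL, descR, matches);

    std::vector<cv::Point2f> ptsL, ptsR;
    for (const auto& m : matches)
    {
        ptsL.push_back(kpL[m.queryIdx].pt);
        ptsR.push_back(kpR[m.trainIdx].pt);
    }

    // Estimate the fundamental matrix and rectify without calibration.
    cv::Mat F = cv::findFundamentalMat(ptsL, ptsR, cv::FM_RANSAC);
    cv::Mat H1, H2;
    cv::stereoRectifyUncalibrated(ptsL, ptsR, F, left.size(), H1, H2);

    cv::Mat rectL, rectR;
    cv::warpPerspective(left,  rectL, H1, left.size());
    cv::warpPerspective(right, rectR, H2, right.size());

    // Block matching on the rectified pair.
    auto bm = cv::StereoBM::create(64, 15);   // numDisparities, blockSize
    cv::Mat disparity, disp8;
    bm->compute(rectL, rectR, disparity);
    disparity.convertTo(disp8, CV_8U, 255.0 / (64 * 16.0));
    cv::imwrite("disparity.png", disp8);
    return 0;
}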

Feed GStreamer sink into OpenPose

I have a custom USB camera with a custom driver on a custom board (Nvidia Jetson TX2) that is not detected by the OpenPose examples. I access the data using a custom GStreamer source. I currently pull frames into a cv::Mat, color-convert them, and feed them into OpenPose on a per-picture basis. It works fine, but it is 30-40% slower than a comparable video stream from a plug-and-play camera. I would like to explore features like tracking that are available for streams, since I'm trying to maximize the fps. I believe the stream feed is superior due to better (continuous) use of the GPU.
In particular, the speedup would come at the expense of confidence and would be addressed later: one frame goes through pose estimation and the next 3-4 frames just track the object with decreasing confidence levels. I tried that with a plug-and-play camera and the OpenPose example, and the results were somewhat satisfactory.
The point where I'm stuck is that I can put the video stream into a cv::VideoCapture, but I do not know how to provide that capture to OpenPose for processing.
If there is a better way to do it, I am happy to try different things but the bottom line is that the custom camera stays (I know ;/). Solutions to the issue described or different ideas are welcome.
Things I already tried:
Lowering the resolution of the camera (the camera crops below a certain resolution instead of binning, so I can't really go below 1920x1080; it's a 40+ megapixel video camera, by the way)
Using CUDA to shrink the image before feeding it to OpenPose (the shrink + pose estimation time was virtually equivalent to pose estimation on the original image)
Since the camera view is static, checking for changes between frames, cropping the image down to the area that changed, and running pose estimation on that section (10% speedup, high risk of missing something)
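Regarding the wiring itself, a minimal sketch of reading the GStreamer feed through cv::VideoCapture and pushing frames into OpenPose's asynchronous C++ wrapper is shown below. The pipeline string is a placeholder for the custom source, and the OpenPose calls follow the tutorial_api_cpp examples shipped with v1.7; exact names may differ in other versions.

// Sketch: GStreamer pipeline -> cv::VideoCapture -> OpenPose asynchronous wrapper.
#include <openpose/headers.hpp>
#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder pipeline; replace with the custom camera source.
    cv::VideoCapture cap(
        "v4l2src device=/dev/video0 ! videoconvert ! appsink",
        cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return 1;

    // Start OpenPose in asynchronous mode: we push images in and pop results out.
    op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
    opWrapper.start();

    cv::Mat frame;
    while (cap.read(frame))
    {
        const op::Matrix input = OP_CV2OPCONSTMAT(frame);
        auto processed = opWrapper.emplaceAndPop(input);
        if (processed != nullptr && !processed->empty())
        {
            const auto& keypoints = processed->at(0)->poseKeypoints;
            // ... use keypoints, or hand them to a tracker for the next few frames
        }
    }
    return 0;
}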

What's the best way to fix Pixel Aspect Ratio (PAR) issues on DirectShow?

I am using a DirectShow filtergraph to grab frames from videos. The current implementation follows this graph:
SourceFilter->SampleGrabber->NullRenderer
This works most of the time to extract images frame by frame for further processing. However, I encountered issues with some videos that do not have a PAR of 1:1: those images appear stretched in my processing steps.
The only fix I have found so far is to use a VMR9 renderer in windowless mode and call GetCurrentImage() to extract a bitmap with the correct aspect ratio, but this method is not very useful for continuously grabbing thousands of frames.
My question now is: what is the best way to fix this problem? Has anyone run into this issue as well?
The Sample Grabber gets you frames with the original pixels. It is not really a problem that an aspect ratio is attached and the pixels are not "square": to convert to square pixels you simply need to stretch the image accordingly. It is easier to do this scaling step outside of the DirectShow pipeline, and you have all the data you need: the pixels and the original media type. You can calculate the corresponding square-pixel resolution and resample the picture.
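As a sketch of that calculation, assuming the connected media type is a VIDEOINFOHEADER2 (the format DirectShow uses to carry the picture aspect ratio); any resampler can then be applied to the grabbed frames:

// Sketch: derive a square-pixel target size from the Sample Grabber's media type.
// With a plain VIDEOINFOHEADER there is no aspect ratio field and the pixels
// can already be treated as square.
#include <dshow.h>
#include <dvdmedia.h>   // VIDEOINFOHEADER2

bool GetSquarePixelSize(const AM_MEDIA_TYPE& mt, LONG& outWidth, LONG& outHeight)
{
    if (mt.formattype != FORMAT_VideoInfo2 || !mt.pbFormat)
        return false;

    const VIDEOINFOHEADER2* vih2 = (const VIDEOINFOHEADER2*)mt.pbFormat;
    LONG srcH = vih2->bmiHeader.biHeight;
    if (srcH < 0) srcH = -srcH;                    // top-down DIBs store a negative height
    const DWORD parX = vih2->dwPictAspectRatioX;   // picture (display) aspect ratio
    const DWORD parY = vih2->dwPictAspectRatioY;
    if (!parX || !parY)
        return false;

    // Keep the original height and choose a width so that width:height matches
    // the display aspect ratio, which yields square pixels.
    outHeight = srcH;
    outWidth  = (LONG)((srcH * (double)parX / (double)parY) + 0.5);
    return true;
}

Each grabbed frame would then be resized to (outWidth, outHeight) before further processing.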

Getting and Setting Camera Settings

I have been searching around and can't find an example of how to get and set the camera capture settings, for example the capture resolution, fps, color balance, etc. I have only seen examples of how to change the settings when saving the captured video, but I want to be able to find all of the camera's capture modes and choose the one I want. For example, I am using the PS3eye webcam, and its test program allows you to change the settings (320x240 at 15, 30, 60, 120 fps; 640x480 at 15, 30, 60, 75 fps). So is there a function in OpenCV for getting all of the camera's capture modes and choosing one? I remember OpenFrameworks had a function to change these settings, but I would like to know how to do it in OpenCV.
Here is the code for OpenFrameworks with OpenCV that does sort of what I want:
vidGrabber.setDeviceID( 4 );
vidGrabber.setDesiredFrameRate( 30 ); //I want this
vidGrabber.videoSettings();
vidGrabber.setVerbose(true);
vidGrabber.initGrabber(320,240); //And this
You can use cvSetCaptureProperty() with these flags:
CV_CAP_PROP_FRAME_WIDTH - width of frames in the video stream (only for cameras)
CV_CAP_PROP_FRAME_HEIGHT - height of frames in the video stream (only for cameras)
CV_CAP_PROP_FPS - frame rate (only for cameras)
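A minimal sketch of requesting a mode through OpenCV's C++ interface (cv::VideoCapture exposes the same properties; whether a request is honored depends on the camera driver):

// Sketch: request a capture mode through OpenCV, then read it back to verify.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);                       // first camera
    if (!cap.isOpened())
        return 1;

    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
    cap.set(cv::CAP_PROP_FPS, 60);

    std::cout << "Actual mode: "
              << cap.get(cv::CAP_PROP_FRAME_WIDTH)  << "x"
              << cap.get(cv::CAP_PROP_FRAME_HEIGHT) << " @ "
              << cap.get(cv::CAP_PROP_FPS) << " fps\n";
    return 0;
}

Note that OpenCV only lets you request a mode this way; it has no call that enumerates every mode the camera supports, which is what a DirectShow-based approach such as the library mentioned below can do.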
I built a DirectShow camera library which is able to acquire the camera resolutions and properties.
https://github.com/kcwongjoe/directshow_camera
The fps usually depends on your machine, the resolution, and the exposure time. To change the fps, you can modify the exposure time at a fixed resolution.

Combining Direct3D, Axis to make multiple IP camera GUI

Right now, what I'm trying to do is make a new GUI, essentially software using DirectX (more exactly, Direct3D), that displays streaming images from Axis IP cameras.
For the time being I figured that the flow for the entire program would be like this:
1. Get the Axis program to get streaming images
2. Pass the images to the Direct3D program.
3. Display the images on the screen.
Currently I have made a somewhat basic Direct3D app that loads and displays video frames from AVI files (for testing). I don't know how to load images directly from videos using DirectX, so I used OpenCV to save frames from the video and have DirectX upload them. Very slow.
Right now I have some unclear things:
1. How to get an Axis program that works in C++ (going to look up examples later, probably no big deal)
2. How to upload images directly from the Axis IP camera program.
So guys, do you have any recommendations or suggestions on how to make my program work more efficiently? Anything just let me know.
Well, you may find it faster to use DirectShow and add a custom renderer at the far end that copies the decompressed video data directly into a Direct3D texture.
It's well worth double buffering that texture, i.e. have texture 0 displaying and texture 1 being uploaded to, and then swap the two over when a new frame is available (i.e. display texture 1 while uploading to texture 0).
This way you can decouple the video frame rate from the rendering frame rate, which makes dropped frames a little easier to handle.
I use in-place updates of Direct3D textures (using IDirect3DTexture9::LockRect) and it works very fast. Which part of your program is slow?
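A rough sketch of that in-place update, assuming a dynamic texture (D3DUSAGE_DYNAMIC, D3DFMT_X8R8G8B8) and decoded frames arriving as tightly packed 32-bit BGRA rows; names are illustrative:

// Sketch: copy a decoded BGRA frame into a Direct3D 9 texture in place.
#include <d3d9.h>
#include <cstring>

bool UploadFrame(IDirect3DTexture9* texture,
                 const unsigned char* frame,
                 unsigned width, unsigned height)
{
    D3DLOCKED_RECT lr;
    // D3DLOCK_DISCARD is valid for dynamic textures and avoids a GPU stall.
    if (FAILED(texture->LockRect(0, &lr, NULL, D3DLOCK_DISCARD)))
        return false;

    const unsigned srcPitch = width * 4;            // 4 bytes per pixel (BGRA)
    unsigned char* dst = (unsigned char*)lr.pBits;
    for (unsigned y = 0; y < height; ++y)
        memcpy(dst + y * lr.Pitch, frame + y * srcPitch, srcPitch);

    texture->UnlockRect(0);
    return true;
}

Combined with the double buffering from the previous answer, you would upload into whichever of the two textures is not currently being drawn and swap them afterwards.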
For capturing images from Axis cams you may use the iPSi C++ library: http://sourceforge.net/projects/ipsi/
It can be used for capturing images and controlling camera zoom and rotation (if available).