Strange behavior when showing images using OpenCV and Qt - C++

I'm capturing images from a camera using the OpenCV C API and sending them over TCP sockets.
The server is written in C++ (Qt) and receives the frames.
The process works fine and I can see the images on the server.
The weird problem is that when I close both programs and rerun the client and the server, I see the same frame again that I saw in the previous test.
If I close both programs again and rerun them, I see a new frame, not the second one, and the pattern continues.
To make it clearer:
cap1, close, cap1, close, cap3, close, cap3, close, cap5, ... etc.
I've never seen anything like this before!

I had the same problem before.
The frame size is pretty big, and you are probably reading from the buffer in a random way (just guessing); you have to add a timer or an acknowledgment between the camera and OpenCV.
Just try to control the way the camera captures frames.
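One concrete way to do that on the OpenCV side is to flush the driver's queued frames before grabbing the one you send; a rough sketch using the C API from the question (the flush count is a guess and may need tuning):
#include <opencv/highgui.h>
// Discard the driver's queued frames so the frame we send is current,
// not one left over from a previous run.
IplImage *grab_fresh_frame(CvCapture *capture)
{
    for (int i = 0; i < 5; ++i)
        cvGrabFrame(capture);
    return cvQueryFrame(capture); // do not release; owned by capture
}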

I don't know much about TCP/IP or client/server programming, but all I can suggest is to initialize the images, generally in the constructors of the camera/client/server class:
Mat Frame = Mat::zeros(rows, cols, CV_8UC3);
so that every time the client/server is initialized, or before you are ready to exchange images, the start-up image is a blank image.
You must be initializing with cvCreateImage(), so you can do the following:
IplImage *m = cvCreateImage(cvSize(200, 200), 8, 3); // say it's 200 x 200
cvZero(m);
cvShowImage("BLANK", m);
cvWaitKey(0);
This shows a black image with every pixel set to zero.

Of course this issue comes from the camera. It seems that the camera has to receive an acknowledgment once a frame is grabbed. One thing you can try is to go to the line of code that sends the image and save the image to disk, to check whether cap1 is sent twice.
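A minimal sketch of that disk check, sticking with the C API from the question (send_frame() and the filename pattern are placeholders):
#include <opencv/highgui.h>
#include <cstdio>
// Dump every outgoing frame to disk with a running index; if the same
// picture appears under two consecutive indices, the duplication
// happens on the sending side.
void send_and_log(IplImage *frame)
{
    static int frame_no = 0;
    char name[64];
    std::sprintf(name, "sent_%04d.png", frame_no++);
    cvSaveImage(name, frame);
    send_frame(frame); // placeholder for the existing TCP send
}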

Related

HD Video Calling in Unity3D

I am an amateur in video/image processing, but I am trying to create an app for HD video calling. I hope someone can see where I may be going wrong and guide me onto the right path. Here is what I am doing and what I think I understand; please correct me if you know better.
I am currently using OpenCV to grab an image from my webcam in a DLL. (I will be using this image for other things later.)
Currently, the image that OpenCV gives me is a cv::Mat. I resized this and converted it to a byte array the size of a 720p image, which is about 3 megabytes.
I pass this pointer back to my C# code, and I can now render it onto a texture.
Next I created a TCP socket, connected the server and client, and started to transmit the previously captured image byte array. I am able to transmit the byte array over to the client, where I use the GPU to render it to a texture.
Currently there is a big delay of about 400-500 ms. This is after I tried compressing the buffer with GZipStream for Unity, which compressed the byte array from about 3 million bytes to 1.5 million. I am trying to make this as small and as fast as possible, but this is where I am completely lost. I saw that Skype requires only a 1.2 Mbps connection for 720p video calling at 22 fps. I have no idea how they achieve such a small data rate; of course I don't need it to be that small, but it needs to be at least decent.
Please give me a lecture on how this can be done! And let me know if you need anything else from me.
I found a link that may be very useful to anyone working on something similar. https://www.cs.utexas.edu/~teammco/misc/udp_video/
https://github.com/chenxiaoqino/udp-image-streaming/
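One common way to shrink those raw 720p buffers much further than gzip is to JPEG-encode each frame with OpenCV before it reaches the socket; a minimal sketch (OpenCV 3 names assumed, the quality value is illustrative):
#include <opencv2/opencv.hpp>
#include <vector>
// Encode one frame as JPEG before writing it to the socket. A 720p RGB
// frame (~2.7 MB raw) typically compresses to tens of kilobytes.
std::vector<uchar> encode_frame(const cv::Mat &frame)
{
    std::vector<int> params = { cv::IMWRITE_JPEG_QUALITY, 60 }; // tune 30-80
    std::vector<uchar> jpeg;
    cv::imencode(".jpg", frame, jpeg, params);
    return jpeg; // on the receiving side, rebuild with cv::imdecode()
}
Real calling apps go further with a temporal video codec such as H.264, which is how Skype fits 720p into ~1.2 Mbps; per-frame JPEG is just the simplest large win.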

OpenCV: Open Mobotix Camera Feed

I have a Mobotix camera. It is an IP camera. In the API they offer the possibility to get the feed via
http://[user]:[password]@[ip_address]:[port]/cgi-bin/faststream.jpg?[options]
What I've tried is to open it like a normal webcam feed:
cv::VideoCapture capture("http://...");
cv::Mat frame;
if (capture.isOpened()) // always false anyway
{
    while (1)
    {
        capture.read(frame);
        cv::imshow("Hi there", frame);
        cv::waitKey(10);
    }
}
FYI: Developer Mobotix API Docs
EDIT: Thanks to berak, I just had to add &data=v.mjpg to the options:
?stream=full&fps=5.0&noaudio&data=v.mjpg
Note that in v.mjpg only the [dot]mjpg part matters; you could just as well put myfile.mjpg.
Now the problem is the speed at which the feed updates. I get a 2-second delay, and the feed is very, very slow.
And when I change the stream option to MxJPG or mxg, I get a corrupted image where the bytes aren't ordered properly.
EDIT: I tried to change the camera parameters directly with the Mobotix control center, but only the resolution affected my OpenCV program; the speed at which I access the images didn't actually change.
For max speed use fps=0. It's in the API docs.
Something like:
http://cameraip/cgi-bin/faststream.jpg?stream=full&fps=0
see http://developer.mobotix.com/paks/help_cgi-image.html
faststream is the MJPEG stream (for image capture). Make sure MxPEG is turned off and pick the smallest image that gives you enough resolution, i.e. get it working at 640 by 480 (set it in the camera's web GUI), then increase the image size.
Note this is for image capture, not video: you need to detect the beginning and end of each JPEG, then copy it from the receive buffer into memory (see the sketch below).
VLC can handle MxPEG, but you need to either start it from the command line with vlc --ffmpeg-format=mxg or set an edit option ffmpeg-format=mxg in the GUI.
see https://wiki.videolan.org/MxPEG
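As for detecting the JPEG boundaries mentioned above: a JPEG begins with the bytes 0xFF 0xD8 and ends with 0xFF 0xD9. A hedged sketch of the scan (OpenCV 3 names assumed):
#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>
// Pull one complete JPEG out of the accumulated receive buffer, if any,
// and decode it. Returns false until a whole frame has arrived.
bool extract_jpeg(std::vector<unsigned char> &buf, cv::Mat &frame)
{
    const unsigned char soi[] = { 0xFF, 0xD8 }; // start-of-image marker
    const unsigned char eoi[] = { 0xFF, 0xD9 }; // end-of-image marker
    std::vector<unsigned char>::iterator s =
        std::search(buf.begin(), buf.end(), soi, soi + 2);
    std::vector<unsigned char>::iterator e =
        std::search(s, buf.end(), eoi, eoi + 2);
    if (s == buf.end() || e == buf.end())
        return false;
    std::vector<unsigned char> jpeg(s, e + 2);
    frame = cv::imdecode(jpeg, cv::IMREAD_COLOR);
    buf.erase(buf.begin(), e + 2); // discard consumed bytes
    return !frame.empty();
}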
I know this post is quite old, but I thought I'd answer for anyone else who comes across this issue. To get a stream without frame-rate limitations you need to use a different CGI command:
http://<camera_IP>/control/faststream.jpg?stream=full&fps=0
As per the camera's on-line help:
http://<camera_IP>/cgi-bin/faststream.jpg (guest access)
http://<camera_IP>/control/faststream.jpg (user access)
The default limitation of "guest" access is indeed 2 fps, but it can be modified from the page Admin Menu > Language and Start Page.
A detailed description of how to retrieve a live stream from a MOBOTIX camera is available at the following link: https://community.mobotix.com/t/how-to-access-a-live-stream-with-a-video-client-e-g-vlc/202

Intercepting video frames from game

I would like to grab video frames (images) from a game that is currently running on the PC.
XSplit Broadcaster has such functionality. It somehow lists the processes that are actually video games and allows grabbing video frames from them.
As far as I understand, this can be accomplished by enumerating the Direct3D surfaces that exist at the moment and grabbing the picture from them.
Am I correct? And what is the solution for OpenGL games, then?
Have you checked out glReadPixels()? I have used it before. It is a little slow, though.
Try:
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
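For context, a self-contained version of that call might look like the following; note that OpenGL returns rows bottom-up, so the image will appear vertically flipped in most viewers:
#include <GL/gl.h>
#include <vector>
// Copy the current framebuffer into CPU memory as tightly packed RGB.
std::vector<unsigned char> grab_frame(int width, int height)
{
    std::vector<unsigned char> buffer(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows are not padded to 4 bytes
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer.data());
    return buffer; // row 0 is the bottom of the screen
}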
apitrace seems able to capture frames using Ye Olde LD_PRELOAD Tricke.

Qt image I/O and QPixmap::grabWindow

I'm writing a kind of "remote desktop" program and I got stuck on a few points.
I use QPixmap::grabWindow on the server side to capture a screenshot and send it to the client; it is written to a QByteArray and sent via QTcpSocket.
The size of the resulting QPixmap is too big, and as you understand the application is time-critical. Is there a way to optimize that?
(In addition to Michael's more detailed answer:) For compression you can use qCompress / qUncompress (which actually relies on Qt's bundled zlib): http://qt-project.org/doc/qt-4.8/qbytearray.html#qUncompress
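Putting the pieces together, a rough sketch of the send path with qCompress (Qt 4 APIs to match the linked docs; a length header is left out for brevity):
#include <QApplication>
#include <QDesktopWidget>
#include <QPixmap>
#include <QBuffer>
#include <QByteArray>
#include <QTcpSocket>
// Grab the screen, serialize the pixmap into a QByteArray, compress it
// at a low (fast) level, and write it to the socket.
void sendScreenshot(QTcpSocket *socket)
{
    QPixmap shot = QPixmap::grabWindow(QApplication::desktop()->winId());
    QByteArray raw;
    QBuffer buffer(&raw);
    buffer.open(QIODevice::WriteOnly);
    shot.save(&buffer, "BMP");        // uncompressed container
    socket->write(qCompress(raw, 3)); // low compression level = fast
}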
Use deltas. The basic idea is this: imagine a grid overlaying the window image that divides it into 16 px by 16 px (or so) squares. Compare each square with the corresponding one in the previous window image that was sent to the client. If so much as one pixel has changed, send the square's new content to the client (a sketch follows below).
Try compressing the image using some form of quick compression. You could use zlib, for example, but keep the compression level at 3 or below. Or you could compress the entire data stream as it is being sent via TCP (this is tricky: you have to be careful to flush buffers and such).
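A sketch of the delta grid from the first point, assuming same-size QImage frames; the 16-pixel tile and sendTile() are illustrative:
#include <QImage>
#include <QRect>
#include <QtGlobal>
void sendTile(const QRect &r, const QImage &tile); // placeholder: your socket code
// Walk a 16x16 grid over the new frame and transmit only tiles whose
// pixels differ from the previously sent frame.
void sendDeltas(const QImage &prev, const QImage &curr)
{
    const int tile = 16;
    for (int y = 0; y < curr.height(); y += tile)
        for (int x = 0; x < curr.width(); x += tile) {
            QRect r(x, y, qMin(tile, curr.width() - x),
                          qMin(tile, curr.height() - y));
            QImage piece = curr.copy(r);
            if (piece != prev.copy(r)) // QImage comparison is pixel-exact
                sendTile(r, piece);
        }
}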
Adding to Michael's answer:
Reduce resolution
Reduce color depth
Reduce frame rate
Use a screencast codec / decoder

Combining Direct3D and Axis to make a multiple IP camera GUI

Right now, what I'm trying to do is make a new GUI, essentially software using DirectX (more exactly, Direct3D), that displays streaming images from Axis IP cameras.
For the time being, I figure the flow of the entire program will be like this:
1. Get the Axis program to fetch the streaming images.
2. Pass the images to the Direct3D program.
3. Display them on the screen.
Currently I have made a somewhat basic Direct3D app that loads and displays video frames from AVI files (for testing). I don't know how to load images directly from videos using DirectX, so I used OpenCV to save frames from the video and had DX upload them. Very slow.
Right now I have some unclear things:
1. How to get an Axis program that works in C++ (going to look up examples later, probably no big deal).
2. How to upload images directly from the Axis IP camera program.
So, do you have any recommendations or suggestions on how to make my program work more efficiently? Anything, just let me know.
Well, you may find it faster to use DirectShow and add a custom renderer at the far end that copies the decompressed video data directly into a Direct3D texture.
It's well worth double-buffering that texture, i.e. have texture 0 displaying and texture 1 being uploaded to, then swap the two over when a new frame is available (i.e. display texture 1 while uploading to texture 0).
This way you can decouple the video frame rate from the rendering frame rate, which makes dropped frames a little easier to handle.
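A bare-bones sketch of that swap, assuming D3D9 types and leaving the upload helper as a placeholder:
#include <d3d9.h>
#include <utility>
// 'front' is what the renderer samples, 'back' receives the next decoded
// frame. Swapping the pointers makes the new frame live.
IDirect3DTexture9 *front = 0; // created elsewhere
IDirect3DTexture9 *back  = 0; // created elsewhere
void uploadToTexture(IDirect3DTexture9 *tex, const unsigned char *pixels); // placeholder
void onNewFrame(const unsigned char *pixels)
{
    uploadToTexture(back, pixels); // fill the off-screen texture
    std::swap(front, back);        // renderer picks up the new frame
}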
I use in-place updates of Direct3D textures (using IDirect3DTexture9::LockRect) and it works very fast. Which part of your program is slow?
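For reference, a minimal sketch of such an in-place update, assuming the texture was created with D3DUSAGE_DYNAMIC so that D3DLOCK_DISCARD is valid:
#include <d3d9.h>
#include <cstring>
// Copy a raw frame into the texture row by row, respecting the pitch
// the driver reports (often wider than width * bytesPerPixel).
void updateTexture(IDirect3DTexture9 *tex, const unsigned char *src,
                   int width, int height, int bytesPerPixel)
{
    D3DLOCKED_RECT lr;
    if (SUCCEEDED(tex->LockRect(0, &lr, NULL, D3DLOCK_DISCARD))) {
        for (int y = 0; y < height; ++y)
            std::memcpy(static_cast<unsigned char *>(lr.pBits) + y * lr.Pitch,
                        src + y * width * bytesPerPixel,
                        width * bytesPerPixel);
        tex->UnlockRect(0);
    }
}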
To capture images from Axis cameras you can use the iPSi C++ library: http://sourceforge.net/projects/ipsi/
It can be used for capturing images and for controlling camera zoom and rotation (if available).