I'm using the preview pin (PIN_CATEGORY_PREVIEW) with RenderStream to display video from a webcam, but even though the function returns 0x1, the video window that pops up is empty. I can see the webcam turning on (the LED turns green), but there's no image. The same thing happens with a capture pin, where the buffer never gets filled.
Interestingly, the program works fine for webcams built into laptops. I've tested with two separate laptops and both worked fine. The webcam itself is fully functional as I'm able to play video using Skype or MATLAB.
Does anyone have a clue as to what may be happening here?
I have a video reader and writer based on this tutorial. I would like to be able to save the video without imshow(), because when the camera is running there will not be a monitor attached to the computer. When I run the code with imshow commented out, it does not save the video; the file it saves says "This file contains no playable streams." When I run the code with imshow() not commented out, it works perfectly. Any suggestions on how to save video input from a camera when there is no monitor to use imshow with?
I'm working with OMXPlayer on a Raspberry Pi.
Right now I have a loop (in Python 2.7) to show videos, and it works correctly.
But I have two problems:
1. When one video finishes, the desktop is shown for about a second, and I don't want that. How can I switch quickly to the next video without showing the desktop?
2. The other problem is that I want to show some pictures too. I know that OMXPlayer does not display images. Can I use another program in my code? The user should not notice the change.
Thanks.
I was trying to figure this out too, but I'm afraid it's not possible. It seems to me that omxplayer can only play a single video at a time, and to play another video you have to launch a new instance of the program. That takes a while to initialize, hence the gap between the videos.
That said, I figured out a hacky way to get around it. When playing a video, you can extract its last frame into a PNG file with ffmpeg. Then you can play the video with omxplayer and use whatever graphical tools you have to display that picture fullscreen in the background. When the video ends, it disappears, but the picture stays there, and since it's the last frame of the video, playback appears to just freeze at the end for a second before the next video starts. Then you repeat the process. Hope it helps.
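The approach above can be scripted. A hedged Python sketch follows: the ffmpeg flags (-sseof to seek from the end, -update 1 to keep overwriting a single image) are my assumption of one way to grab the last frame, and fbi as the fullscreen image viewer is likewise just an example; check both against the versions on your Pi:

```python
import subprocess

def last_frame_cmd(video_path, png_path):
    # Seek to 1 second before end-of-file and write a single PNG.
    # (ffmpeg flags assumed; verify against your ffmpeg version.)
    return ["ffmpeg", "-y", "-sseof", "-1", "-i", video_path,
            "-update", "1", "-q:v", "2", png_path]

def play_loop(videos):
    """Play videos back-to-back, masking the startup gap with the last frame."""
    for video in videos:
        png = video + ".last.png"
        subprocess.call(last_frame_cmd(video, png))
        # Show the frozen frame fullscreen in the background
        # ("fbi" is one option on a console-only Pi; assumption).
        viewer = subprocess.Popen(["fbi", "-a", "--noverbose", png])
        subprocess.call(["omxplayer", "-b", video])  # blocks until playback ends
        viewer.terminate()

# Inspect the command this would run, without executing anything:
print(last_frame_cmd("clip.mp4", "clip.png"))
```

Extracting the PNGs once ahead of time (rather than inside the loop) would shrink the gap further.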
I've written a C++ program that receives an RTSP stream via GStreamer and displays the video via Qt5 in a QWidget. As the GStreamer video sink, I use a Widgetqt5glvideosink.
The problem is that the received stream has too much red in it. This only occurs when the vertical resolution exceeds roughly 576 pixels (lower resolutions have no issue).
When I use CPU rendering (Widgetqt5videosink) instead of OpenGL rendering, I get a correct image.
When I view the stream via gstreamer command line or via VLC it is also correct.
So it appears to be an issue when using an OpenGL-rendered QWidget.
Is this a driver issue or something else?
Info:
Tested on Ubuntu 16.04 and 17.04 for the viewer application.
Links:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/qt-gstreamer/html/qtvideosink_overview.html
I managed to fix my problem by patching two files in the source code of qt-gstreamer.
There were two incorrect color conversion matrices for BT.709 colorimetry.
Patch to fix red artifact in Widgetqt5glvideosink
So I am trying to figure out how to get a video feed (or screenshot feed, if I must) of the desktop using OpenGL on Windows and display it in a 3D environment. I plan to integrate this with ARToolKit to make essentially a virtual screen. The only issue is that I have tried manually getting the pixels in OpenGL, but I have been unable to properly display them in a 3D environment.
I apologize in advance that I do not have minimal runnable code, but due to all the dependencies and whatnot, trying to get ARToolKit code running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolKit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.
There are 2 different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation, you basically need a texture that gets updated on each frame. I recall that ARToolKit for Unity has an example that plays video.
Streaming the desktop to another device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me that what you want to do is make a VLC viewer and put that into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers.
I'm working on software whose current version ships a custom-made device driver for a webcam. We use this driver with our software, which alters the captured image before displaying it, very similar to YouCam.
Basically, when any application that uses the webcam starts, our driver runs processing on each frame before showing it.
The problem is that there are always two webcams installed: the real one and our custom driver.
I noticed that YouCam does what we need, which is, to hook some method in any installed webcam that will process each frame before showing it.
Does anyone know how to do this?
We use VC++.
Thanks
As bkritzer said, OpenCV easily does what you want.
#include <assert.h>
#include <opencv/highgui.h>

IplImage *image = 0;    // OpenCV image type
CvCapture *capture = 0; // OpenCV capture handle
// Create capture from the first available camera
capture = cvCaptureFromCAM (0);
assert (capture && "Can't connect webcam"); // assert takes a single expression
// Capture images
while (stillCapturing)
{
// Grab image
cvGrabFrame (capture);
// Retrieve image (the buffer is owned by the capture; do not free it)
image = cvRetrieveFrame (capture);
// You can configure the refresh time
if (image) cvWaitKey (refreshTime);
// Process your image here
//...
}
// Release the capture when finished
cvReleaseCapture (&capture);
You can encapsulate these OpenCV calls into a C++ class and dedicate a specific thread to it; these will be your driver.
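The C-style API in the snippet above is OpenCV's old interface; cv2.VideoCapture is the modern one. As a sketch of the "class plus dedicated thread" idea, here is a hedged Python version where the capture object is injected, so a stub can stand in for a real camera (pass cv2.VideoCapture(0) in real use; the class and stub names are mine, not from any library):

```python
import threading

class FrameGrabber:
    """Grabs frames from a capture object on a dedicated thread.

    `capture` only needs a read() -> (ok, frame) method, so either a
    real cv2.VideoCapture(0) or a test stub can be passed in.
    """
    def __init__(self, capture):
        self.capture = capture
        self.latest = None
        self._running = False
        self._thread = None
        self._lock = threading.Lock()

    def start(self):
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            ok, frame = self.capture.read()
            if not ok:
                break  # source exhausted or camera disconnected
            with self._lock:
                self.latest = frame  # process/transform the frame here

    def join(self, timeout=None):
        self._thread.join(timeout)

    def stop(self):
        self._running = False
        self.join()

class StubCapture:
    """Stand-in for cv2.VideoCapture so the sketch runs without a camera."""
    def __init__(self, frames):
        self.frames = list(frames)
    def read(self):
        return (True, self.frames.pop(0)) if self.frames else (False, None)

grabber = FrameGrabber(StubCapture(["f1", "f2", "f3"]))
grabber.start()
grabber.join()           # stub runs out of frames, so the loop exits on its own
print(grabber.latest)    # last frame the thread stored
```

With a real camera the loop runs until stop() is called, and consumers read grabber.latest under the same lock.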
I think that YouCam uses a DirectShow transform filter. Is that what you need?
Check out the OpenCV libraries. It has a bunch of tutorial examples and libraries that do exactly what you're asking for. It's a bit tough to install, but I've gotten it to work before.
Well, I think there are two key concepts in this question that have been misunderstood:
1) How to hook webcam capture
2) ...any application that uses the webcam...
If I understood correctly, OpenCV is useful for writing your own complete application, "complete" meaning that it opens the camera and processes the images itself. So it wouldn't satisfy point 2), which I understand as referring to another application (not yours!) opening the camera while your application processes the images.
Point 1) seems to confirm this, because "hook" usually means intercepting some other process that is not part of your own application.
So I doubt whether this question has been answered or not. I am also interested in it.