How to write to a virtual webcam in Linux? - c++

I want to capture video from a real webcam, apply filters with OpenCV, and write the filtered video to a virtual webcam so I can stream it on the web.
I don't have a problem with the first two steps, but I don't know how to write to a virtual webcam.
Is it possible?
How can I do it?
I use OpenCV with C++ on Debian.
Thanks

Well, actually this is possible. A quick and dirty way to do this is to use WebcamStudio.
That will create a new video device (e.g., /dev/video2) that other programs see as a normal video device. It can take its input from the desktop, so you just set it up to capture the part of the screen where OpenCV's output is shown.
A better but more technical way is to use the v4l2loopback kernel module. This way you can simply write the output of OpenCV to the loopback device, which other programs see as a regular video device. See the README at the bottom of this page:
https://github.com/umlaeute/v4l2loopback
and the wiki page:
https://github.com/umlaeute/v4l2loopback/wiki
for more information.
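For the v4l2loopback route, a minimal sketch (my own, not taken from the project) of what the OpenCV side can look like is below. It assumes OpenCV 3 or later, that the module is already loaded and has created /dev/video2, and that the consumers of the stream accept raw RGB24 frames (some prefer YUYV):

```cpp
// Hypothetical sketch: capture from the real webcam, filter with OpenCV,
// and push the frames into a v4l2loopback device via plain write().
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    const int width = 640, height = 480;

    // Open the loopback device created by v4l2loopback (the path is an assumption).
    int fd = open("/dev/video2", O_WRONLY);
    if (fd < 0) { perror("open /dev/video2"); return 1; }

    // Tell the loopback device what we are going to write.
    v4l2_format fmt{};
    fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    fmt.fmt.pix.width        = width;
    fmt.fmt.pix.height       = height;
    fmt.fmt.pix.pixelformat  = V4L2_PIX_FMT_RGB24;
    fmt.fmt.pix.field        = V4L2_FIELD_NONE;
    fmt.fmt.pix.bytesperline = width * 3;
    fmt.fmt.pix.sizeimage    = width * height * 3;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    // Capture from the real webcam.
    cv::VideoCapture cap(0);
    cap.set(cv::CAP_PROP_FRAME_WIDTH,  width);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, height);

    cv::Mat bgr, rgb;
    while (cap.read(bgr)) {
        // ... apply your OpenCV filters on 'bgr' here ...
        cv::resize(bgr, bgr, cv::Size(width, height));      // make sure the size matches the format
        cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);          // OpenCV is BGR, the device was told RGB24
        write(fd, rgb.data, rgb.total() * rgb.elemSize());  // one full frame per write()
    }
    close(fd);
    return 0;
}
```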
Hope that helps.

You can also use a combination of v4l2loopback, OBS Studio and obs-v4l2sink.
Use OBS Studio to capture video from your device; obs-v4l2sink is a small plugin that writes OBS's output to the /dev/video* device of your choice.
https://github.com/umlaeute/v4l2loopback/wiki/OBS-Studio
https://github.com/CatxFish/obs-v4l2sink

Related

how to use c++ to do data acquisition from frame grabber

We have an "MC1362 Camera" and an "Inspecta-5" frame grabber in our lab. There is program in LABVIEW11 which gets the data from a frame grabber, however as the Labview is slow my supervisor has told me to write a program in c++ to get the data from the frame grabber. I have no idea how to write a c++ program to connect to a frame grabber and do the data acquisition. I know how to write software in c++, but have never tried programming to connect to hardware and read data from it. Is there any specific library or framework which can help me, or any tutorial?
Please, if anybody knows, help me in this matter.
Update:just to add, we are doing medical image analysis, and a laser illuminate a subject, so camera will take pictures and pass it to the computer. I need to grab the pictures and analysis them.
You basically have a couple of options:
1. See if there is an SDK for the grabber card. If there is, this is usually easier than option 2, but it is of course restricted to that grabber or family of grabber cards; we do it this way with the Euresys grabber cards.
2. Assuming you are running on a Windows platform, implement a DirectShow filter graph and write your own output filter to get the data. The SDK for DirectShow is quite good and has many examples. This approach is far more flexible and should let you use a number of grabbers, but it is also a lot more complex; we do it this way for USB and some other built-in grabbers.
Our software is done in Delphi 7, but it is just importing DLLs; for C++ it should be no problem, and most SDKs are written around C++ anyway.
I know it's not much, but it's a place to start.
Update
Just did a quick Google search: there is an SDK for that grabber, and at first glance it seems fairly straightforward.

Recording application output to video using FFmpeg (or similar)

We have a requirement to let users record a video of our 3D application. I can already grab the individual rendered frames, so this question is specifically about how to write frames into a video file.
I don't think writing each frame as a separate file and post-processing is a workable option.
I can look at options to record to a simple video file for later optimising/encoding, or to write directly to a sensibly encoded format.
FFmpeg was suggested in another post, but it looks a bit daunting to me. Is it the best option? If not, what can you suggest? We can work with LGPL but not full GPL.
We're working on Windows (Win32, not MFC) in C++. Sample/pseudo code with your recommended library is very much appreciated... basically I'm after how to do 3 functions:
startRecording() does whatever initialization is needed
recordFrame() takes pointer to frame data and encodes it, ideally with timing data
endRecording() finalizes the video file, shuts down video system, etc
Check out the sources of Taksi on SourceForge: http://taksi.sourceforge.net/
You need two things:
1. A codec to compress the frames.
2. A container file format, like AVI or MPG.
Taksi uses the old Video for Windows API and AVI rather than the newer COM APIs, but it still might work for you.
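To make that concrete, here is a rough sketch of the three requested functions on top of the same Video for Windows AVIFile API that Taksi uses. The function names match the question; everything else (the fixed frame rate and writing uncompressed, bottom-up RGB24 frames) is my own assumption - a real codec could be inserted with AVIMakeCompressedStream:

```cpp
// Sketch only: write raw 24-bit RGB frames into an AVI via the VfW AVIFile API.
#include <windows.h>
#include <vfw.h>
#pragma comment(lib, "vfw32.lib")

static PAVIFILE   g_file   = nullptr;
static PAVISTREAM g_stream = nullptr;
static LONG       g_frame  = 0;
static int        g_width = 0, g_height = 0;

bool startRecording(const char* path, int width, int height, int fps) {
    g_width = width; g_height = height; g_frame = 0;
    AVIFileInit();
    if (AVIFileOpenA(&g_file, path, OF_WRITE | OF_CREATE, nullptr) != AVIERR_OK)
        return false;

    AVISTREAMINFOA si = {};
    si.fccType = streamtypeVIDEO;
    si.dwScale = 1;
    si.dwRate  = fps;                           // frames per second (constant rate)
    si.dwSuggestedBufferSize = width * height * 3;
    SetRect(&si.rcFrame, 0, 0, width, height);
    if (AVIFileCreateStreamA(g_file, &g_stream, &si) != AVIERR_OK)
        return false;

    BITMAPINFOHEADER bih = {};
    bih.biSize        = sizeof(bih);
    bih.biWidth       = width;
    bih.biHeight      = height;                 // positive height = bottom-up DIB
    bih.biPlanes      = 1;
    bih.biBitCount    = 24;
    bih.biCompression = BI_RGB;                 // uncompressed frames
    return AVIStreamSetFormat(g_stream, 0, &bih, sizeof(bih)) == AVIERR_OK;
}

bool recordFrame(const void* rgbData) {
    LONG size = g_width * g_height * 3;
    return AVIStreamWrite(g_stream, g_frame++, 1, const_cast<void*>(rgbData),
                          size, AVIIF_KEYFRAME, nullptr, nullptr) == AVIERR_OK;
}

void endRecording() {
    if (g_stream) AVIStreamRelease(g_stream);
    if (g_file)   AVIFileRelease(g_file);
    AVIFileExit();
}
```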

DirectShow - Getting video frames

I'm creating a Windows video capture application and am using DirectShow for capture. As each frame comes in, I want to grab it as a raw RGB bitmap into a buffer, at which point my code will do whatever processing I need.
I've been searching for samples similar to what I want to do, and everywhere I look online, people recommend using either the IMediaDet and/or the ISampleGrabber interface to do frame-by-frame capture. Unfortunately, both are deprecated and aren't even in the newest version of the Windows SDK.
What is the best (modern) way to do frame-by-frame capture in DirectShow? If there is none, is there a different library I should use that will give me frame-by-frame capture?
The Sample Grabber was deprecated a few years ago, which was itself a few years after DirectShow development actually stopped. In other words: use the Sample Grabber as the suggested method you have read about; it is still going to work fine for you.
The only additional thing you will need is to copy the interface definitions into your source code; see these for details:
Alternative for ISampleGrabber
Sample Grabber replacement
ISampleGrabber deprecated: where can I find alternatives?
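Once those definitions are in place, wiring the Sample Grabber into a capture graph looks roughly like the sketch below. This is only an outline under those assumptions (the graph already exists, the copied declarations provide ISampleGrabber and its GUIDs, and strmiids.lib is linked), not a complete program:

```cpp
// Rough sketch: add a Sample Grabber to an existing graph and ask it for
// raw 24-bit RGB, so each delivered buffer is a plain bitmap.
#include <dshow.h>
// #include "sample_grabber_defs.h"  // hypothetical header holding your copied
//                                   // ISampleGrabber / CLSID_SampleGrabber declarations

HRESULT AddSampleGrabber(IGraphBuilder* pGraph, ISampleGrabber** ppGrabber)
{
    IBaseFilter* pGrabberF = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_SampleGrabber, nullptr, CLSCTX_INPROC_SERVER,
                                  IID_IBaseFilter, (void**)&pGrabberF);
    if (FAILED(hr)) return hr;

    pGraph->AddFilter(pGrabberF, L"Sample Grabber");
    hr = pGrabberF->QueryInterface(IID_ISampleGrabber, (void**)ppGrabber);
    pGrabberF->Release();
    if (FAILED(hr)) return hr;

    // Restrict the connection to uncompressed RGB24 video.
    AM_MEDIA_TYPE mt = {};
    mt.majortype = MEDIATYPE_Video;
    mt.subtype   = MEDIASUBTYPE_RGB24;
    (*ppGrabber)->SetMediaType(&mt);

    // Either keep a copy of the latest frame and poll it with GetCurrentBuffer() ...
    (*ppGrabber)->SetBufferSamples(TRUE);
    // ... or register a callback that is invoked per frame:
    // (*ppGrabber)->SetCallback(&myGrabberCB, 1);   // 1 = BufferCB, 0 = SampleCB
    return S_OK;
}
```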

Grab Video Stream from FireWire

I'm trying to stream video from a camera (Sony HVR-Z1E) over FireWire to my computer. The incoming pictures/stream shall be processed further by some functions which expect the CvMat format (from OpenCV).
Well, my problem is that I have no idea how to grab the stream. OpenCV 2.1 offers me some methods (cvCaptureFromCAM), but no matter which parameter I give it, it always grabs the stream from the laptop's built-in webcam and not from the FireWire camera. I heard I need to switch the primary camera in the DirectShow API (with the Windows SDK), but I don't know how to do that either.
So any suggestions how to do this?
See my related answer here. OpenCV cannot capture video from Firewire cameras natively. You will either need to use the CMU1394 driver, or the Sony driver (if an SDK is available for it) to capture video from that camera, and then pass it to OpenCV.
Years ago, I made something like this using DirectShow. The main limitation was that the image acquired via DirectShow was in standard PAL resolution; HD image grabbing was not possible (it was one of the first prosumer HD camcorders from Sony, I don't remember the exact model now). The good thing was that this method didn't need anything except bare DirectShow - no additional drivers and so on. And it was VERY fast.
In general, the method was something like this:
building a media render graph (of course, you have to enumerate the video devices at that stage)
inserting into it a custom class which inherits from ISampleGrabberCB.
How it worked:
it used the BufferCB() virtual method from ISampleGrabberCB - which you have to implement in your inherited class.
in that method, you have to store the data in a shared struct and handle it from the main thread (see the rough sketch below).
I know that's a bit of a fuzzy description, but I hope you'll find your info (googling for "ISampleGrabberCB" should be a good starting point; there should be a lot of sample code).
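For illustration, a rough sketch of such a callback class is below. It assumes the ISampleGrabberCB declaration has been copied into the project (as in the previous answer), that the graph negotiated a PAL-sized RGB24 format (720x576 is my assumption), and it leaves out the locking a real implementation needs around the shared frame:

```cpp
#include <dshow.h>
#include <opencv2/opencv.hpp>
// ISampleGrabberCB / IID_ISampleGrabberCB must come from your copied definitions.

class FrameGrabberCB : public ISampleGrabberCB {
public:
    cv::Mat latest;   // written from the graph thread, read from the main thread

    // Called by the Sample Grabber for every delivered frame.
    STDMETHODIMP BufferCB(double /*sampleTime*/, BYTE* pBuffer, long /*bufferLen*/) override {
        // Wrap the buffer without copying, then flip into 'latest': DirectShow RGB
        // buffers are bottom-up, and flip also gives us our own copy, since
        // pBuffer is only valid during this call.
        cv::Mat frame(576, 720, CV_8UC3, pBuffer);   // PAL size: assumption
        cv::flip(frame, latest, 0);
        return S_OK;
    }
    STDMETHODIMP SampleCB(double, IMediaSample*) override { return E_NOTIMPL; }

    // Minimal IUnknown plumbing; the object lives as a member of the app,
    // so reference counting is not needed for this sketch.
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv) override {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) { *ppv = this; return S_OK; }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() override  { return 1; }
    STDMETHODIMP_(ULONG) Release() override { return 1; }
};
```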

Webcam: Programmatically adjust Webcam parameters

In our project, we would like to access the webcam image programmatically.
The main problem we have is that the webcam automatically adjusts its sensitivity depending on the brightness of the captured image.
Is there any (platform-independent) way to change this kind of parameter on the webcam (preferably for any model)?
We're currently using Ubuntu 10.04 and Microsoft Windows XP & 7. The programming language is C/C++.
Any idea is appreciated.
Thanks and regards
Tobias
There most likely won't be a platform-independent way to do what you need. If there is, it's probably in some high-level language, which likely won't suit you.
I don't know about the Linux platform, but I'm a C++/Windows/COM/DirectShow developer who works on internet-based video applications.
On the Windows platform, capture devices are communicated with via COM and DirectShow.
For a general overview of video capture on windows, see the Video Capture section of MSDN.
Have a look at Selecting a Capture Device for information on how to enumerate the capture devices on your system. You'll need to enumerate the devices in the CLSID_VideoInputDeviceCategory, in order to discover (programmatically) the webcam as a video input device - there may be many devices in this category.
Video capture devices have a "FriendlyName" to help identify your webcam, which you can store and use to retrieve the device later.
Once you've got the device, you said you wanted to configure it; check out Configuring a Video Capture Device for that.
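For the specific problem of the camera adapting its sensitivity on its own, the configuration step usually boils down to the IAMCameraControl and IAMVideoProcAmp interfaces exposed by the capture filter. Here is a hedged sketch: support for each property varies by driver, and locking to the default values is just an example.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Sketch: switch exposure and gain from automatic to a fixed manual value
// on an already-enumerated webcam capture filter.
void DisableAutoSensitivity(IBaseFilter* pWebcam)
{
    IAMCameraControl* pCamCtrl = nullptr;
    if (SUCCEEDED(pWebcam->QueryInterface(IID_IAMCameraControl, (void**)&pCamCtrl))) {
        long minV, maxV, step, defV, caps;
        if (SUCCEEDED(pCamCtrl->GetRange(CameraControl_Exposure,
                                         &minV, &maxV, &step, &defV, &caps))) {
            // Lock exposure to the default instead of letting the camera adapt.
            pCamCtrl->Set(CameraControl_Exposure, defV, CameraControl_Flags_Manual);
        }
        pCamCtrl->Release();
    }

    IAMVideoProcAmp* pProcAmp = nullptr;
    if (SUCCEEDED(pWebcam->QueryInterface(IID_IAMVideoProcAmp, (void**)&pProcAmp))) {
        long minV, maxV, step, defV, caps;
        if (SUCCEEDED(pProcAmp->GetRange(VideoProcAmp_Gain,
                                         &minV, &maxV, &step, &defV, &caps))) {
            pProcAmp->Set(VideoProcAmp_Gain, defV, VideoProcAmp_Flags_Manual);
        }
        pProcAmp->Release();
    }
}
```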
DirectShow is one of Microsoft's most comprehensive (and difficult) APIs to learn. The MSDN developer forum on DirectShow is very active and beginner friendly and I highly recommend you check it out.
Finally, capture graphs aren't the easiest thing to build in DirectShow; I'd start off with a simple playback graph - e.g., play back a media file from disk - and progress from there to capture graphs.
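For reference, that simple playback graph is only a handful of calls; something like the sketch below (the file path is just an example, and error checks are omitted for brevity):

```cpp
// Minimal DirectShow playback graph: render a file from disk and run it.
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

int main() {
    CoInitialize(nullptr);

    IGraphBuilder* pGraph = nullptr;
    CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void**)&pGraph);

    // Build a playback graph for a media file (path is just an example).
    pGraph->RenderFile(L"C:\\sample.avi", nullptr);

    IMediaControl* pControl = nullptr;
    pGraph->QueryInterface(IID_IMediaControl, (void**)&pControl);
    pControl->Run();

    IMediaEvent* pEvent = nullptr;
    pGraph->QueryInterface(IID_IMediaEvent, (void**)&pEvent);
    long evCode = 0;
    pEvent->WaitForCompletion(INFINITE, &evCode);   // block until playback ends

    pEvent->Release();
    pControl->Release();
    pGraph->Release();
    CoUninitialize();
    return 0;
}
```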
The VLC project is open source and cross-platform, and it uses DirectShow for playback on the Windows platform.
Good luck!