I want to list all connected webcams (USB-webcams and internal webcams), using C++, OpenCV 2.4.11, Windows 8.1 and Qt Creator 3.4.2.
For me it is enough to get the number of accessible webcams in the following way:
VideoCapture videoCapture(0); // Will access my internal laptop webcam.
VideoCapture videoCapture(1); // Will access the first connected usb webcam.
Here is my code:
// The following loop detects how many webcams are accessible, counting from 0 upwards.
numberOfDevices = 0;
bool noError = true;

while (noError)
{
    try
    {
        // Check if camera is available.
        VideoCapture videoCapture(numberOfDevices); // Will crash if not available, hence try/catch.
        // ...
    }
    catch (...)
    {
        noError = false;
    }

    // If the above call worked, we have found another camera.
    numberOfDevices++;
}
The code in the try-block works if my internal webcam is activated. The call fails when I deactivate the internal camera in the device manager (and no other camera is attached to my laptop), with the following error message (debug mode):
Exception Triggered
---------------------------
The inferior stopped because it triggered an exception.
Stopped in thread 0 by:
Exception at 0x7ff8533d9090, code: 0xc0000005: read access violation at: 0x0, flags=0x0 (first chance).
and the following two build problems:
Exception at 0x7ff871af8b9c, code: 0xa1a01db1: , flags=0x0 (first chance)
Exception at 0x7ff8533d9090, code: 0xc0000005: read access violation at: 0x0, flags=0x0 (first chance)
How can I catch the occurring error? As you can see, try/catch does not work.
Or is there a method where I can access all available webcams in OpenCV without such a dirty loop?
There is still no functionality related to camera count in OpenCV at the current moment (version 3.0.0) - see the corresponding ticket.
Proper camera handling seems to be an OpenCV-internal problem (described here or here, for example). Usually it appears in capture code after physically disabling a camera while it is still open in OpenCV (i.e. when we try to read from a destroyed file descriptor).
Generally you can even implement your own handler for access violations (please look into this thread), but it's a really dirty trick.
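If you do stay with a probing loop, a safer shape is to check VideoCapture::isOpened() rather than relying on exceptions, since OpenCV does not guarantee that a failed open throws. Here is a minimal sketch with the probe factored into a helper (countDevices is an illustrative name, not an OpenCV function); with OpenCV the probe body would be `VideoCapture cap(i); return cap.isOpened();`:

```cpp
#include <functional>

// Count consecutive device indices, starting at 0, that the probe accepts.
// With OpenCV the probe would be:
//   [](int i) { cv::VideoCapture cap(i); return cap.isOpened(); }
int countDevices(const std::function<bool(int)>& probe)
{
    int n = 0;
    while (probe(n))
        ++n;
    return n;
}
```

Note that the count only increments after a successful probe, which also avoids the off-by-one in the original loop (numberOfDevices++ ran even after the failing attempt). If the VideoCapture constructor itself crashes on your build, isOpened() cannot help, but on most builds an unavailable index simply yields a capture that is not opened.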
I created a C++ class that enumerates the devices (including their IDs) for use with OpenCV. It is hosted on GitHub.
https://github.com/studiosi/OpenCVDeviceEnumerator
The idea is to use DirectShow to get all devices in the category with GUID CLSID_VideoInputDeviceCategory and then, through an enumerator, find the order in which they appear on the system. That order is the ID you need to open a device in OpenCV: create a VideoCapture object using the constructor that receives the ID, i.e. the index of the device in the enumeration. Obviously, this approach only works on Windows.
Related
So we are using a stack consisting of C++ Media Foundation code in order to play back video files. An important requirement is the ability to play these videos in constantly repeating sequences, so every single video slot will periodically change the video it is playing. In our current example we create 16 HWNDs to render video into and 16 corresponding player objects. The main application loops over all of them one after another and does the following:
Shutdown the last player
Release the object
CoCreateInstance for a new player
Initialize the player with the (old) HWND
Start Playback
The media player is called "MediaPlayer2"; it needs to be built and registered as a COM server (regsvr32). The main application is found in the TAPlayer2 project. It looks up the CLSID of the player in the registry and instantiates it. As the current test file we use a test.mp4 that has to reside on disk as C:\test.mp4.
Now everything goes fine initially. The program loops through the players and the videos keep restarting and playing. The memory footprint is normal and everything runs smoothly. After a time frame of anything between 20 minutes and 4 days, all of a sudden things get weird. At this point it seems as if calls to "InitializeRenderer" by the EVR slow down and eventually don't go through at all anymore. With this, the thread count and memory footprint start to increase drastically, and after a certain amount of time (depending on the available RAM) all the memory is exhausted and our application crashes, usually somewhere in the GPU driver or near the EVR DLL.
I am happy to try out any other code examples that propose to solve my issue: displaying multiple video windows at the same time and looping through them like in a playlist. It needs to run on Windows 10!
I have been going at this for quite a while now and am pretty much stuck. I uploaded the above-mentioned code example and added the link to this post. It should work out of the box, afaik. I can also provide code excerpts here in the thread if that is preferred.
Any help is appreciated, thanks
Thomas
Link to demo project (VS2015): https://filebin.net/l8gl79jrz6fd02vt
edit: the following code from the end of winmain.cpp is used to restart the players:
do
{
    for (int i = 0; i < PLAYER_COUNT; i++)
    {
        hr = g_pPlayer[i]->Shutdown();
        SafeRelease(&g_pPlayer[i]);
        hr = CoCreateInstance(CLSID_AvasysPlayer,  // CLSID of the coclass
                              NULL,                    // no aggregation
                              CLSCTX_INPROC_SERVER,    // the server is in-proc
                              __uuidof(IAvasysPlayer), // IID of the interface we want
                              (void**)&g_pPlayer[i]);  // address of our interface pointer
        hr = g_pPlayer[i]->InitPlayer(hwndPlayers[i]);
        hr = g_pPlayer[i]->OpenUrl(L"C:\\test.mp4");
    }
} while (true);
Some Media Foundation interfaces, like
IMFMediaSource
IMFMediaSession
IMFMediaSink
need to have Shutdown called on them before they are Released.
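One way to make this discipline automatic is a small RAII wrapper. Below is a generic sketch (ShutdownPtr is an illustrative name, not a Media Foundation or ATL type) that guarantees Shutdown() runs before Release() whenever the holder goes out of scope:

```cpp
// Holds a COM-style pointer whose interface also has a Shutdown() method
// (IMFMediaSession, IMFMediaSink, ...). On destruction it shuts the object
// down first, then releases the reference - never the other way round.
template <typename T>
class ShutdownPtr
{
public:
    explicit ShutdownPtr(T* p = nullptr) : m_p(p) {}
    ~ShutdownPtr() { reset(); }

    void reset(T* p = nullptr)
    {
        if (m_p)
        {
            m_p->Shutdown();  // stop Media Foundation worker threads / resources
            m_p->Release();   // then drop the COM reference
        }
        m_p = p;
    }

    T* operator->() const { return m_p; }

private:
    ShutdownPtr(const ShutdownPtr&);            // non-copyable
    ShutdownPtr& operator=(const ShutdownPtr&);

    T* m_p;
};
```

With a holder like this, a restart loop cannot leak a still-running sink even when an early call fails and the function returns before the explicit cleanup code.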
At this point it seems as if calls to "InitializeRenderer" by the EVR slow down and eventually don't go through anymore at all.
... usually somewhere in the GPU driver or near the EVR DLL.
These two observations are a good lead for a precise search in your code.
In your file PlayerTopoBuilder.cpp, at CPlayerTopoBuilder::AddBranchToPartialTopology :
if (bVideo)
{
    if (false)
    {
        BREAK_ON_FAIL(hr = CreateMediaSinkActivate(pSD, hVideoWnd, &pSinkActivate));
        BREAK_ON_FAIL(hr = AddOutputNode(pTopology, pSinkActivate, 0, &pOutputNode));
    }
    else
    {
        //// try directly create renderer
        BREAK_ON_FAIL(hr = MFCreateVideoRenderer(__uuidof(IMFMediaSink), (void**)&pMediaSink));
        CComQIPtr<IMFVideoRenderer> pRenderer = pMediaSink;
        BREAK_ON_FAIL(hr = pRenderer->InitializeRenderer(nullptr, nullptr));
        CComQIPtr<IMFGetService> getService(pRenderer);
        BREAK_ON_FAIL(hr = getService->GetService(MR_VIDEO_RENDER_SERVICE, __uuidof(IMFVideoDisplayControl), (void**)&pVideoDisplayControl));
        BREAK_ON_FAIL(hr = pVideoDisplayControl->SetVideoWindow(hVideoWnd));
        BREAK_ON_FAIL(hr = pMediaSink->GetStreamSinkByIndex(0, &pStreamSink));
        BREAK_ON_FAIL(hr = AddOutputNode(pTopology, 0, &pOutputNode, pStreamSink));
    }
}
You create an IMFMediaSink with MFCreateVideoRenderer and store it in pMediaSink. pMediaSink is released thanks to the use of CComPtr, but it is never shut down.
You must keep a reference to the media sink and Shutdown/Release it when the player shuts down.
Or you can use a different approach with MFCreateVideoRendererActivate.
IMFMediaSink::Shutdown
If the application creates the media sink, it is responsible for calling Shutdown to avoid memory or resource leaks.
In most applications, however, the application creates an activation object for the media sink, and the Media Session uses that object to create the media sink.
In that case, the Media Session — not the application — shuts down the media sink. (For more information, see Activation Objects.)
I also suggest using this kind of code at the end of CPlayer::CloseSession (after releasing all other objects):
if (m_pSession != NULL)
{
    hr = m_pSession->Shutdown();
    ULONG ulMFObjects = m_pSession->Release();
    m_pSession = NULL;
    assert(ulMFObjects == 0);
}
For the use of MFCreateVideoRendererActivate, you can look at my MFNodePlayer project:
MFNodePlayer
EDIT
I rewrote your program, but I tried to keep your logic and original source code (CComPtr, mutexes, and so on):
MFMultiVideo
Tell me if this program has memory leaks.
Depending on your answer, we can then talk about best practices with Media Foundation.
Another thought: your program uses 1 to 16 IMFMediaSession instances. On a good computer configuration you could use only one IMFMediaSession, I think (I never tried to aggregate 16 media sources).
Visit:
CustomVideoMixer
to understand the other way to do it.
I think the approach of using 16 IMFMediaSession instances is not the best one on a modern computer. VuVirt talks about this.
EDIT2
I've updated MFMultiVideo using Work Queues.
I think the problem may be that you call MFStartup/MFShutdown for each player.
Just call MFStartup/MFShutdown once, in winmain.cpp for example, like my program does.
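A sketch of the intended shape: one guard object for the whole process, constructed at the top of wWinMain and destroyed on exit, so no player ever calls the pair itself. The Media Foundation calls are kept in comments so the skeleton stands alone outside Windows; the class name is illustrative:

```cpp
// Process-lifetime guard. On Windows the constructor/destructor would make
// the real calls: MFStartup(MF_VERSION) and MFShutdown().
class MediaFoundationGuard
{
public:
    MediaFoundationGuard()  { /* MFStartup(MF_VERSION); */ s_alive = true;  }
    ~MediaFoundationGuard() { /* MFShutdown();           */ s_alive = false; }

    // Lets player code verify that startup has already happened.
    static bool alive() { return s_alive; }

private:
    static bool s_alive;
};
bool MediaFoundationGuard::s_alive = false;
```

Declared as the first local in wWinMain, the guard ensures MFShutdown runs after every player has been destroyed, which avoids the hang caused by shutting down Media Foundation while sessions are still live.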
I am very new to the EDSDK, so sorry for the maybe weird question in some places.
Is it possible to access a video stream and perform some operations on it using the SDK? I need this to capture a very thin region (ROI) of a specified size (for example 3840x10 px) from each frame in the stream. Don't understand this as compression of a frame; aspect ratios do not need to be preserved. In theory these changes should increase fps, because the region will be very thin (should they?).
I found the code snippet below in the official documentation, although it seems this only sends a signal to start or stop video recording, without accessing the stream.
EdsUInt32 record_start = 4; // Begin movie shooting
err = EdsSetPropertyData(cameraRef, kEdsPropID_Record, 0, sizeof(record_start), &record_start);
EdsUInt32 record_stop = 0; // End movie shooting
err = EdsSetPropertyData(cameraRef, kEdsPropID_Record, 0, sizeof(record_stop), &record_stop);
I would be very thankful for any suggestions and help. Please feel free to ask for any additional information!
This SDK doesn't allow you direct access to high-res streams the way industrial cams would. Over USB you can access the live-view images, roughly 960x640, as sequential JPEGs. Movie recording can only be done to the internal card, and the result can be transferred after stopping. Outside of this SDK, an external HDMI recorder gives access to a near-realtime feed at up to FullHD 1080p, depending on the model, and the feed is not always "clean".
I am writing desktop app for windows on C++ using Qt for GUI and GStreamer for audio processing.
In my app I need to monitor several internet AAC audio streams to see if they are online, and listen to the available stream with the highest priority. For this task I use the GstDiscoverer object from GStreamer, but I have some problems with it.
I check audio streams every 1-2 seconds, so GstDiscoverer is called very often.
And every time my app runs, it eventually crashes with a segmentation fault during a GstDiscoverer check.
I tried both the sync and async methods of calling GstDiscoverer (gst_discoverer_discover_uri(), gst_discoverer_discover_uri_async()); both behave the same way.
The crash happens in aac_type_find() function from gsttypefindfunctions.c on line 1122 (second line of code below).
len = ((c.data[offset + 3] & 0x03) << 11) |
(c.data[offset + 4] << 3) | ((c.data[offset + 5] & 0xe0) >> 5);
Local variables received from the debugger during one of the crashes:
As we can see, the offset variable is greater than c.size, so c.data[offset] is out of range; I think that's why the segmentation fault happens.
This does not happen regularly; the program can run for several hours or only ten minutes. But it seems to me that it happens more often when the time interval between GstDiscoverer calls is small. So there is some probability of a crash on any call into aac_type_find().
I tried GStreamer versions 1.6.1 and latest 1.6.2, the bug exists in both.
Can somebody help me with this problem? Is this a GStreamer bug, or am I maybe doing something wrong?
It was reported to the GStreamer project here and a patch for the crash was merged and will be in the next releases: https://bugzilla.gnome.org/show_bug.cgi?id=759910
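The fix amounts to checking that every byte the length computation touches is inside the buffer before reading. A standalone sketch of that shape (struct and function names here are illustrative, not the actual GStreamer patch):

```cpp
#include <cstddef>
#include <cstdint>

struct Buf
{
    const uint8_t* data;
    size_t size;
};

// Reads the 13-bit ADTS frame length spread over bytes offset+3..offset+5.
// Returns false instead of touching memory past the end of the buffer,
// which is what the unpatched aac_type_find() did when offset exceeded c.size.
bool adtsFrameLen(const Buf& c, size_t offset, unsigned& len)
{
    if (c.size < 6 || offset > c.size - 6)  // need data[offset + 5] in range
        return false;
    len = ((c.data[offset + 3] & 0x03) << 11) |
          (c.data[offset + 4] << 3) |
          ((c.data[offset + 5] & 0xe0) >> 5);
    return true;
}
```

The guard is written as `offset > c.size - 6` only after confirming `c.size >= 6`, so the unsigned subtraction cannot wrap around.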
I have an issue when finalizing a video recording into an .mp4 using Media Foundation, where the call to IMFSinkWriter->Finalize() hangs forever. It doesn't always happen, and it can happen on almost any machine (seen on Windows Server, 7, 8, 10). Flush() is called on the audio and video streams beforehand, and no new samples are added between Flush and Finalize. Any ideas on what could cause Finalize to hang forever?
Things I've tried:
- Logging all HRESULTs to check for any issues (I was already checking them before proceeding to the next line of code). Everything comes back as S_OK; I'm not seeing any issues.
- Added an IMFSinkWriterCallback on the stream to get callbacks when the stream processes markers (adding markers every 10 samples) and when Finalize() finishes. I haven't been able to reproduce the hang since adding this, but it should give the best information about what's going on once I do.
- Searched code samples online to see how others set up the Sink Writer and how Finalize() is used. I didn't find many samples, and my code looks similar to the ones that I did find.
- Looked at the encoders available and used by each system, including the version of the encoder DLL. Encoders varied between the AMD H.264 Hardware MFT Encoder and the H264 Encoder MFT on machines that could reproduce the issue. Versions didn't seem to matter, and some of the machines were up to date with video drivers.
Here are some code samples without any HRESULT checking (it doubled the amount of code, so I took it out).
Building the sink sample:
CComPtr<IMFAttributes> pAttr;
::MFCreateAttributes( &pAttr, 4 );
pAttr->SetGUID( MF_TRANSCODE_CONTAINERTYPE, GetFileContainerType() );
pAttr->SetUINT32( MF_LOW_LATENCY, FALSE ); // Allows better multithreading
pAttr->SetUINT32( MF_SINK_WRITER_DISABLE_THROTTLING, TRUE ); // Does not block
pAttr->SetUINT32( MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE );
m_pCallback.Attach( new MFSinkWriterCallback() );
pAttr->SetUnknown( MF_SINK_WRITER_ASYNC_CALLBACK, m_pCallback );
::MFCreateSinkWriterFromURL( m_strFilename.c_str(), NULL, pAttr, &m_pSink );
if ( m_pVideoInputType && m_pVideoOutputType )
{
    m_pSink->AddStream( m_pVideoOutputType, &m_dwVideoStreamId );

    // Attributes for encoding?
    CComPtr<IMFAttributes> pAttrVideo;
    // Not sure if these are needed
    //::MFCreateAttributes( &pAttrVideo, 5 );

    m_pSink->SetInputMediaType( m_dwVideoStreamId, m_pVideoInputType, pAttrVideo );
}
if ( m_pAudioInputType && m_pAudioOutputType )
{
    m_pSink->AddStream( m_pAudioOutputType, &m_dwAudioStreamId );

    // Attributes for encoding?
    CComPtr<IMFAttributes> pAttrAudio;
    // Not sure if these are needed
    //::MFCreateAttributes( &pAttrAudio, 2 );
    //pAttrAudio->SetGUID( MF_MT_SUBTYPE, MFAudioFormat_AAC );
    //pAttrAudio->SetUINT32( MF_MT_AUDIO_BITS_PER_SAMPLE, 16 );

    m_pSink->SetInputMediaType( m_dwAudioStreamId, m_pAudioInputType, pAttrAudio );
}
m_pSink->BeginWriting();
Stopping the recording sample:
if ( m_dwVideoStreamId != (DWORD)-1 )
{
    m_pSink->Flush( m_dwVideoStreamId );
}
if ( m_dwAudioStreamId != (DWORD)-1 )
{
    m_pSink->Flush( m_dwAudioStreamId );
}
m_pSink->Finalize();
There are a lot of situations where a Media Foundation application can hang:
Calling MFShutdown/CoUninitialize while Media Foundation objects are still in use.
Using a GUI and making bad use of the Windows message pump in a multithreaded application.
Bad use of MTA/STA components.
Bad use of critical sections / wait-for-event functions.
Forgetting to call the EndXXX() function when a BeginXXX() function is used.
Bad use of callback functions.
Forgetting to call AddRef when necessary, and releasing an object that is still used by another thread.
A bug in Media Foundation (there are some on Windows Seven).
and so on...
When I say minimal source code, I mean: isolate the source code that does the encoding process, and provide it on GitHub if it's too large. It's better if we can compile and try the source code, because deadlocks are difficult to find.
I'm trying to connect to several USB cameras that are connected to different USB buses (hopefully avoiding the bandwidth bottleneck caused by USB).
Before accessing the cameras, I probe them directly with the V4L2 API to see if they're accessible. For now there are 3 cameras, and V4L2 sees them all.
Then I try to access them using OpenCV like this, each in its own object:
this->camera = new cv::VideoCapture(camera_port);
if (this->camera->isOpened()) {
    ....
    cv::Mat capturedImage;
    bool read;
    read = this->camera->read(capturedImage);
    ....
}
where camera_port is 0,1,2.
Obviously release() is called on this->camera at program closure.
The problem arises when I access more than one camera: only one camera out of the three returns an image. The others produce errors that I see on the console (not the same ones on every run):
libv4l2: error turning on stream: Connection timed out,
VIDIOC_STREAMON: Connection timed out
libv4l2: error turning on stream: Input/output error,
VIDIOC_STREAMON: Input/output error
libv4l2: error turning on stream: Invalid argument,
VIDIOC_STREAMON: Invalid argument
Some other errors appear as well, but the above are the most frequent.
However, this does work on the first run after re-plugging the USB cameras, but not on further runs of the program.
Thoughts?