I am working on a Universal Windows Platform (UWP) application in which I am using C++ as the main language. I want to read from two cameras at the same time. One is the Kinect RGB camera and the other is the Kinect depth camera. So far I've managed to read from just one using this piece of code:
void SDKTemplate::Scenario4_ReproVideo::Grabar_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    CameraCaptureUI^ dialog = ref new CameraCaptureUI();
    dialog->VideoSettings->Format = CameraCaptureUIVideoFormat::Mp4;

    concurrency::task<StorageFile^>(dialog->CaptureFileAsync(CameraCaptureUIMode::Video)).then([this](StorageFile^ file) {
        if (file != nullptr) {
            concurrency::task<Streams::IRandomAccessStream^>(file->OpenAsync(FileAccessMode::Read)).then([this](Streams::IRandomAccessStream^ stream) {
                CapturedVideo->SetSource(stream, "video/mp4");
                logger->Text = "recording";
            });
            Windows::Foundation::Collections::IPropertySet^ appSettings = ApplicationData::Current->LocalSettings->Values;
            appSettings->Insert("CapturedVideo", PropertyValue::CreateString(file->Path));
        }
        else {
            logger->Text = "Something went wrong or was cancelled";
        }
    });
}
By doing this I can reliably record from one of the cameras. My problem is that I need to record from both cameras at the same time, since I need the depth and RGB streams together to process the video.
I am new to concurrency; is there a way (the simpler the better) to achieve two recordings simultaneously?
In a UWP app, we can capture photos and video using the MediaCapture class, which provides functionality for capturing photos, audio, and video from a capture device. See the topic Basic photo, video, and audio capture with MediaCapture.
We can initialize multiple MediaCapture instances and then read frames using the MediaFrameReader class. See the topics Discover and select camera capabilities with camera profiles and Process media frames with MediaFrameReader, and also look into the official CameraFrames sample.
Besides, there is a similar thread about multiple camera capture in UWP that you can also refer to:
Handle multiple camera capture UWP
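For illustration, here is a minimal, untested C++/CX sketch of that approach: find a MediaFrameSourceGroup that exposes both a color and a depth source (as the Kinect does), initialize a single MediaCapture for that group, and create one MediaFrameReader per source. The function name and the frame handling are placeholders, not code from the topics above:

#include <ppltasks.h>

using namespace concurrency;
using namespace Windows::Media::Capture;
using namespace Windows::Media::Capture::Frames;

void StartColorAndDepth()
{
    create_task(MediaFrameSourceGroup::FindAllAsync())
        .then([](Windows::Foundation::Collections::IVectorView<MediaFrameSourceGroup^>^ groups)
    {
        for (unsigned int i = 0; i < groups->Size; i++)
        {
            MediaFrameSourceGroup^ group = groups->GetAt(i);

            // Keep only a group that exposes both kinds of source.
            bool hasColor = false, hasDepth = false;
            for (unsigned int j = 0; j < group->SourceInfos->Size; j++)
            {
                auto kind = group->SourceInfos->GetAt(j)->SourceKind;
                hasColor = hasColor || (kind == MediaFrameSourceKind::Color);
                hasDepth = hasDepth || (kind == MediaFrameSourceKind::Depth);
            }
            if (!hasColor || !hasDepth)
                continue;

            auto capture = ref new MediaCapture();
            auto settings = ref new MediaCaptureInitializationSettings();
            settings->SourceGroup = group;
            settings->StreamingCaptureMode = StreamingCaptureMode::Video;

            create_task(capture->InitializeAsync(settings))
                .then([capture, group]()
            {
                // One reader per source; color and depth frames then arrive
                // concurrently through their FrameArrived events.
                for (unsigned int j = 0; j < group->SourceInfos->Size; j++)
                {
                    MediaFrameSource^ source =
                        capture->FrameSources->Lookup(group->SourceInfos->GetAt(j)->Id);
                    create_task(capture->CreateFrameReaderAsync(source))
                        .then([](MediaFrameReader^ reader)
                    {
                        reader->FrameArrived +=
                            ref new Windows::Foundation::TypedEventHandler<
                                MediaFrameReader^, MediaFrameArrivedEventArgs^>(
                            [](MediaFrameReader^ r, MediaFrameArrivedEventArgs^ args)
                        {
                            MediaFrameReference^ frame = r->TryAcquireLatestFrame();
                            if (frame != nullptr)
                            {
                                // Process the color or depth frame here.
                            }
                        });
                        reader->StartAsync();
                    });
                }
            });
            break;
        }
    });
}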
Currently in my project we are using gtkmm's Gdk::Pixbuf::create_from_file or create_from_data, which hangs the whole GUI for 1-2 seconds for high-resolution images; when loading multiple images for a screen it becomes awfully slow. Is it possible to load images asynchronously in gtkmm with these two functions? I can find methods for loading images asynchronously in GTK, but not in gtkmm. An example would be helpful, since I am unable to find anything related to it.
if (!imageName.empty())
{
    // Load the image into a pixbuf
    picPixBuff = Gdk::Pixbuf::create_from_file(imageName);
    picPixBuff = picPixBuff->scale_simple(150, 35, Gdk::INTERP_BILINEAR);
}
I have gone through this.
Related Question - How to load a widget as a different thread in gtk? (vala)
There are docs for that: the gtkmm documentation's multi-threaded program example does exactly what you are asking for.
These are the functions that do the magic; the example is easy to follow.
// notify() is called from ExampleWorker::do_work(). It is executed in the worker
// thread. It triggers a call to on_notification_from_worker_thread(), which is
// executed in the GUI thread.
void ExampleWindow::notify()
{
m_Dispatcher.emit();
}
void ExampleWindow::on_notification_from_worker_thread()
{
if (m_WorkerThread && m_Worker.has_stopped())
{
// Work is done.
if (m_WorkerThread->joinable())
m_WorkerThread->join();
delete m_WorkerThread;
m_WorkerThread = nullptr;
update_start_stop_buttons();
}
update_widgets();
}
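Applied to pixbuf loading, a minimal sketch along the same lines might look like this (the class and member names are invented for illustration, and error handling is omitted):

#include <gtkmm.h>
#include <thread>

// Loads and scales a pixbuf on a worker thread, then hands the result
// back to the GUI thread through a Glib::Dispatcher, as in the example above.
class AsyncImage
{
public:
    AsyncImage()
    {
        // The slot connected here runs in the GUI thread whenever the
        // worker thread calls m_dispatcher.emit().
        m_dispatcher.connect(sigc::mem_fun(*this, &AsyncImage::on_loaded));
    }

    ~AsyncImage()
    {
        if (m_thread.joinable())
            m_thread.join();
    }

    void load(const std::string& path)
    {
        m_thread = std::thread([this, path]()
        {
            // Worker thread: the expensive calls no longer block the GUI.
            auto pixbuf = Gdk::Pixbuf::create_from_file(path);
            m_result = pixbuf->scale_simple(150, 35, Gdk::INTERP_BILINEAR);
            m_dispatcher.emit(); // wake up the GUI thread
        });
    }

    Gtk::Image& widget() { return m_image; }

private:
    void on_loaded()
    {
        // GUI thread: safe to touch widgets here.
        if (m_thread.joinable())
            m_thread.join();
        m_image.set(m_result);
    }

    Glib::Dispatcher m_dispatcher;
    std::thread m_thread;
    Glib::RefPtr<Gdk::Pixbuf> m_result;
    Gtk::Image m_image;
};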
I am writing a C++ program using SDL 2 for the platform layer and OpenGL for graphics and rendering. I have a fully working prototype with keyboard and mouse input. I am now trying to use SDL's game controller API to connect a gamepad (to replace or supplement keyboard controls). Unfortunately, the controller does not seem to be recognized, despite the fact that it works perfectly with other software. It's a Sony DualShock 4 (for the PlayStation 4). My system is Mac OS 10.9.5, and I am using SDL 2.0.5 with the official community controller database for SDL 2.0.5, which contains PS4 controller mappings:
030000004c050000c405000000000000,PS4 Controller,a:b1,b:b2,back:b8,dpdown:h0.4,dpleft:h0.8,dpright:h0.2,dpup:h0.1,guide:b12,leftshoulder:b4,leftstick:b10,lefttrigger:a3,leftx:a0,lefty:a1,rightshoulder:b5,rightstick:b11,righttrigger:a4,rightx:a2,righty:a5,start:b9,x:b0,y:b3,platform:Mac OS X,
4c05000000000000c405000000000000,PS4 Controller,a:b1,b:b2,back:b8,dpdown:h0.4,dpleft:h0.8,dpright:h0.2,dpup:h0.1,guide:b12,leftshoulder:b4,leftstick:b10,lefttrigger:a3,leftx:a0,lefty:a1,rightshoulder:b5,rightstick:b11,righttrigger:a4,rightx:a2,righty:a5,start:b9,x:b0,y:b3,platform:Mac OS X
I also added a new mapping using one of the official tools. That also loads successfully according to the relevant function call.
The following is my code, and it's about as close to a minimal example as I can get:
// in main
// window and graphics context initialization here
// initialize SDL
if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER | SDL_INIT_HAPTIC) < 0) {
fprintf(stderr, "%s\n", "SDL could not initialize");
return EXIT_FAILURE;
}
// load controller mappings; I tested this, and 35 mappings load successfully, which is expected
SDL_GameControllerAddMappingsFromFile("./mapping/gamecontrollerdb_205.txt");
// the controller handle
SDL_GameController* controller = nullptr;
// max_joysticks is 1, which means the device is at least connected
int max_joysticks = SDL_NumJoysticks();
if (max_joysticks < 1) {
return EXIT_FAILURE;
}
// this branch is taken and the function returns: the joystick exists, but it isn't recognized as a game controller
if (!SDL_IsGameController(0)) {
return EXIT_FAILURE;
}
// I never get past this point.
controller = SDL_GameControllerOpen(0);
fprintf(stdout, "CONTROLLER: %s\n", SDL_GameControllerName(controller));
Has anyone encountered this problem? I've done some preliminary searching as I mentioned, but it seems that usually either the number of joysticks is 0, or everything is recognized.
Also, SDL_CONTROLLERDEVICEADDED isn't firing when I connect the controller.
The controller is connected via USB before I start the program. Also, this is one of the new controllers, and I'm not sure whether the mappings work with that new one. I assume so considering that there are two distinct entries.
Thank you.
EDIT:
I double-checked and the PS4 controller works fine as a joystick, but it isn't recognized as a controller, which means that the mapping is incorrect or non-existent. This may be because my controller is "version 2" of the DualShock 4, and I'm not sure whether a 2.0.5-compatible mapping for it was ever added.
The controller was recognized as a joystick but not as a controller, meaning that none of the available mappings I could find (in the 2.0.5 controller-mapping format) corresponded to the controller. Updating from SDL 2.0.5 to 2.0.8 also updated the available mappings, it seems, and now the controller is recognized as a game controller.
Note: normally it is a terrible idea to upgrade tools mid-project, but in this case it was safe to do.
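For anyone hitting this, a minimal hotplug loop is also a useful sanity check: SDL sends SDL_CONTROLLERDEVICEADDED both for hotplugged controllers and for controllers already attached at startup, so a recognized device will show up here. This is a generic sketch, not the project code above:

#include <SDL.h>
#include <cstdio>

int main(int, char**)
{
    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER) < 0)
        return 1;

    SDL_GameController* controller = nullptr;
    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            switch (e.type) {
            case SDL_CONTROLLERDEVICEADDED:
                // e.cdevice.which is a device index here...
                if (!controller) {
                    controller = SDL_GameControllerOpen(e.cdevice.which);
                    fprintf(stdout, "opened: %s\n", SDL_GameControllerName(controller));
                }
                break;
            case SDL_CONTROLLERDEVICEREMOVED:
                // ...but a joystick instance id here
                if (controller && e.cdevice.which == SDL_JoystickInstanceID(
                        SDL_GameControllerGetJoystick(controller))) {
                    SDL_GameControllerClose(controller);
                    controller = nullptr;
                }
                break;
            case SDL_QUIT:
                running = false;
                break;
            }
        }
        SDL_Delay(16);
    }
    SDL_Quit();
    return 0;
}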
I want to stream a preview video from my image sensor to my pc.
Later I want to add custom filters.
First I used AmCap to get a preview video; it works fine.
However, I want my project to be based on PlayCap (it's not as complicated as AmCap).
When I start PlayCap it detects a device, but it shows just a black screen.
I haven't modified the code in either example.
Does anyone know how to fix this problem?
Or perhaps someone can describe how to add a custom filter to AmCap.
What does the SampleCGB part of AmCap do?
Kind regards,
afo
I'm going to try to distill the steps of creating a capture graph below. This is a complex process, and there are usually multiple ways to accomplish most of the steps, so you'll have to do your own research from here on and ask specific questions.
In the code snippets below I'll use _com_ptr_t smart pointers, which are defined using the _COM_SMARTPTR_TYPEDEF(IGraphBuilder, __uuidof(IGraphBuilder)) macro. So, to define IGraphBuilderPtr you'd do something like this:
_COM_SMARTPTR_TYPEDEF(IGraphBuilder, __uuidof(IGraphBuilder))
// which defines IGraphBuilderPtr
You'll always need to have a graph so the first step is pretty universal:
IGraphBuilderPtr graph;
graph.CreateInstance(CLSID_FilterGraph);
Then, if you're building a capture graph, you'll most likely want to use the following interface:
ICaptureGraphBuilder2Ptr cg;
cg.CreateInstance(CLSID_CaptureGraphBuilder2);
cg->SetFiltergraph(graph);
After that, you'll need to add one or more input sources to the graph. You'll need to find the filter that wraps your image sensor as a video capture device and add that to the graph.
This is going to be a multi-step process which will likely involve doing something like this:
Enumerate all video capture devices:
IBaseFilterPtr fVideoInput; // will hold the video input source
ICreateDevEnumPtr pCreate(CLSID_SystemDeviceEnum);
IEnumMonikerPtr pEnum;
HRESULT hr = pCreate->CreateClassEnumerator(CLSID_VideoInputDeviceCategory,
                                            &pEnum,
                                            0);
if (hr == S_OK)
{
    IMonikerPtr pMon;
    while (pEnum->Next(1, &pMon, NULL) == S_OK)
    {
        // inspect the moniker of each device to determine if it's the one you're after
        // if it's the right device, then..
        HRESULT hr = pMon->BindToObject(NULL,
                                        NULL,
                                        __uuidof(IBaseFilter),
                                        (void**)&fVideoInput);
        if (SUCCEEDED(hr))
        {
            // don't forget to add the source filter to the graph itself
            graph->AddFilter(fVideoInput, L"Video Input");
            break;
        }
    }
}
Once you have the interface to the video input filter, you'll have to enumerate its pins and find the correct output pin to connect the rest of the graph to. This can be as simple as enumerating all pins and picking the first output pin (if there is only one), or enumerating all pins and querying the media types of each output pin and selecting the right one that way, or, if you know the name of the pin (and it's always the same), calling FindPin. Below is an example of enumerating the pins and selecting the first output one:
IEnumPinsPtr pEnum;
fVideoInput->EnumPins(&pEnum);
IPinPtr pin;
while (pEnum->Next(1, &pin, NULL) == S_OK)
{
    PIN_DIRECTION dir;
    pin->QueryDirection(&dir);
    if (dir == PINDIR_OUTPUT)
    {
        // this is the first output pin
        break;
    }
}
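Alternatively, as mentioned above, if you already know the pin's identifier, IBaseFilter::FindPin is a one-liner. The identifier "Capture" below is only an assumption; verify the actual pin id for your filter (GraphEdit will show it):

// FindPin looks the pin up by its identifier rather than enumerating
IPinPtr pinOut;
HRESULT hr = fVideoInput->FindPin(L"Capture", &pinOut);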
Once you have the output pin, you may insert another filter, find the appropriate pin (similar to above but looking for input pins) and then connect the two pins directly using:
// assuming pinOut is an output pin
// and pinIn is an input pin, this method will try to connect them
HRESULT hr = graph->Connect(pinOut, pinIn);
Or, you may try to simply render the pin:
hr = graph->Render(pinOut);
And here's an example of inserting a custom filter into the graph. If the filter is already registered in the system (it shows up in the list in GraphEdit), then all you need to know is the class id of the filter. This is a GUID that uniquely identifies the filter; if you don't already know it, you can find it using GraphEdit (create a new graph, insert the custom filter, right-click it and view its properties; the class id of the filter should be listed there):
IBaseFilterPtr fCustomFilter;
fCustomFilter.CreateInstance(CLSID_OF_YOUR_CUSTOM_FILTER);
graph->AddFilter(fCustomFilter, L"Custom Filter Name");
Then, you can proceed in a similar manner as above: find a suitable input pin for the filter and a suitable output pin, and connect them as you see fit.
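If you'd rather not connect pins by hand at all, ICaptureGraphBuilder2::RenderStream can do the source -> custom filter -> renderer wiring in one call. A sketch reusing the variables from above:

// Connects fVideoInput's preview output through fCustomFilter to a
// renderer; passing NULL as the last argument lets the builder pick
// a default renderer.
HRESULT hr = cg->RenderStream(&PIN_CATEGORY_PREVIEW, // or PIN_CATEGORY_CAPTURE
                              &MEDIATYPE_Video,
                              fVideoInput,   // the source filter
                              fCustomFilter, // optional intermediate filter
                              NULL);         // sink filter; NULL = default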
Finally, and this is entirely optional and only useful for debugging (so don't use it in production), you can register the graph with the ROT to make it possible to study the final graph in a tool such as GraphEdit (or GraphStudioNext or the like).
Sample code taken from MSDN for AddToRot and RemoveFromRot:
DWORD dwRegistration = 0;
HRESULT hr = AddToRot(graph, &dwRegistration);
// hold on to dwRegistration and use it later, when you tear down the graph,
// to unregister the graph via RemoveFromRot(dwRegistration)
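For reference, the MSDN versions of those helpers look roughly like this (StringCchPrintfW comes from <strsafe.h>):

#include <strsafe.h>

HRESULT AddToRot(IUnknown *pUnkGraph, DWORD *pdwRegister)
{
    IMoniker *pMoniker = NULL;
    IRunningObjectTable *pROT = NULL;

    if (FAILED(GetRunningObjectTable(0, &pROT)))
        return E_FAIL;

    // Name the ROT entry so tools like GraphEdit can identify the process.
    const size_t STRING_LENGTH = 256;
    WCHAR wsz[STRING_LENGTH];
    StringCchPrintfW(wsz, STRING_LENGTH, L"FilterGraph %08x pid %08x",
                     (DWORD_PTR)pUnkGraph, GetCurrentProcessId());

    HRESULT hr = CreateItemMoniker(L"!", wsz, &pMoniker);
    if (SUCCEEDED(hr))
    {
        // KEEPSALIVE keeps the entry alive until Revoke is called.
        hr = pROT->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE, pUnkGraph,
                            pMoniker, pdwRegister);
        pMoniker->Release();
    }
    pROT->Release();
    return hr;
}

void RemoveFromRot(DWORD dwRegister)
{
    IRunningObjectTable *pROT;
    if (SUCCEEDED(GetRunningObjectTable(0, &pROT)))
    {
        pROT->Revoke(dwRegister);
        pROT->Release();
    }
}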
I have a program that records video from a web camera. It shows the camera view in the form. When the start button is clicked it should start recording video, and recording should stop after the stop button is pressed. The program compiles fine but no video is recorded. Can anyone say what is wrong with it?
Here is my code.
{
    camera = new QCamera(this);
    viewFinder = new QCameraViewfinder(this);
    camera->setViewfinder(viewFinder);

    recorder = new QMediaRecorder(camera, this);

    QBoxLayout *layout = new QVBoxLayout;
    layout->addWidget(viewFinder);
    ui->widget->setLayout(layout);

    QVideoEncoderSettings settings = recorder->videoSettings();
    settings.setResolution(640, 480);
    settings.setQuality(QMultimedia::VeryHighQuality);
    settings.setFrameRate(30.0);
    //settings.setCodec("video/mp4");
    recorder->setVideoSettings(settings);
    recorder->setContainerFormat("mp4");

    camera->setCaptureMode(QCamera::CaptureVideo);
    camera->start();
}
void usbrecorder::on_btn_Record_clicked()
{
    startRecording();
}

void usbrecorder::on_btn_Stop_clicked()
{
    stopRecording();
}

void usbrecorder::startRecording()
{
    recorder->setOutputLocation(QUrl::fromLocalFile("C:\\Users\\Stranger\\Downloads\\Video\\vidoe_001.mp4"));
    recorder->record();
}

void usbrecorder::stopRecording()
{
    recorder->stop();
}
This is due to limitations on Windows.
As mentioned in the Qt documentation here: https://doc.qt.io/qt-5/qtmultimedia-windows.html#limitations
Video recording is currently not supported. Additionally, the DirectShow plugin does not support any low-level video functionality such as monitoring video frames being played or recorded using QVideoProbe or related classes.
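If you want to confirm this at runtime rather than in the docs, a quick sketch (using the recorder member from the question) is:

// On a backend without recording support, the recorder reports itself
// unavailable and/or emits an error once record() is called.
if (!recorder->isAvailable())
    qDebug() << "recording not available on this backend";
connect(recorder, QOverload<QMediaRecorder::Error>::of(&QMediaRecorder::error),
        [](QMediaRecorder::Error err) { qDebug() << "recorder error:" << err; });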
You need to specify an output location:
QMediaRecorder::setOutputLocation(const QUrl& location)
e.g.
setOutputLocation(QUrl("file:///home/user/vid.mp4"));
Try printing the state, status and error message:
qDebug() << recorder->state();
qDebug() << recorder->status();
qDebug() << recorder->error();
and see what it prints. With those messages you can get a clearer picture of your problem. Maybe QMediaRecorder cannot access your camera.
This occurs with a few APKs that use the camera (e.g., ZXing, OpenCV). The preview displays a glitched image, but it is still a function of what the camera sees, so it appears to be an encoding mismatch. The native camera preview works fine, so the built-in apps do not exhibit this problem.
For now, please try adding the following workaround after you acquire the Camera but before you setup and start the preview:
Camera.Parameters params = camera.getParameters();
params.setPreviewFpsRange(30000, 30000);
camera.setParameters(params);
(Or just add the setPreviewFpsRange call to your existing parameters if you're setting others as well.)
For anyone using ZXing on their Glass, you can build a version from the source code with the above fix.
Add the following method into CameraConfigurationManager.java
public void googleGlassXE10WorkAround(Camera mCamera) {
    Camera.Parameters params = mCamera.getParameters();
    params.setPreviewFpsRange(30000, 30000);
    params.setPreviewSize(640, 360);
    mCamera.setParameters(params);
}
And call this method immediately after anywhere you see Camera.setParameters() in the ZXing code. I just put it in two places in the CameraConfigurationManager and it worked.
I set the Preview Size to be 640x360 to match the Glass resolution.
A 30 FPS preview is pretty high. If you want to save some battery and CPU, consider whether the slowest supported FPS is sufficient:
List<int[]> supportedPreviewFpsRanges = parameters.getSupportedPreviewFpsRange();
int[] minimumPreviewFpsRange = supportedPreviewFpsRanges.get(0);
parameters.setPreviewFpsRange(minimumPreviewFpsRange[0], minimumPreviewFpsRange[1]);
The bug still exists as of XE16 and XE16.11, but this code gets past the glitch and shows a normal camera preview; note the three parameter-setting lines and their values. I have also tested this at 5000 (5 FPS), which works, and at 60000 (60 FPS), which does not:
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    if (mCamera == null) return;

    Camera.Parameters camParameters = mCamera.getParameters();
    camParameters.setPreviewFpsRange(30000, 30000);
    camParameters.setPreviewSize(1920, 1080);
    camParameters.setPictureSize(2592, 1944);
    mCamera.setParameters(camParameters);
    try {
        mCamera.startPreview();
    } catch (Exception e) {
        mCamera.release();
        mCamera = null;
    }
}
This is still an issue as of XE22(!). Lowering the frame rate to 30 FPS or lower does the trick:
parameters.setPreviewFpsRange(30000, 30000);
And indeed, don't forget to set the parameters:
camera.setParameters(parameters);
I have found no clear explanation as to why this causes trouble, since 60 fps is within the supported fps range. The camera can record 720p video, but I never saw a source tie the fps limit to that.
You can also try params.setPreviewSize(1200, 800), and vary the values around this range until the color noise clears.