Create a WebRTC VideoTrack with a “custom” Capturer using C++

Is there a way (or hack) to use a "custom" video capturer to create a VideoTrack and provide frames to it?
The classic way to build a VideoTrack is:
1. Get a VideoCapturer instance:
std::unique_ptr<cricket::VideoCapturer> capturer;
2. Create a VideoSource with the provided capturer:
rtc::scoped_refptr<webrtc::VideoTrackSourceInterface> videoSource = peer_connection_factory_->CreateVideoSource(std::move(capturer), NULL);
3. Create a VideoTrack using the VideoSource:
rtc::scoped_refptr<webrtc::VideoTrackInterface> video_track;
video_track = peer_connection_factory_->CreateVideoTrack(kVideoLabel, videoSource);
I was wondering if there is a way to override step one: instead of using the native capturer, use a custom one, so that I can provide the frames to the video track through a callback. That would let me use any video source (file, YUV stream, ...) and be very flexible.
Any advice on this one?
This question is a C++ reference to : Create a WebRTC VideoTrack with a “custom” Capturer on Android with libjingle

I finally found a way to make my own native C++ video capturer. Basically, you have to override some functions from webrtc::I420BufferInterface and cricket::VideoCapturer.
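Here is a minimal sketch of the idea, based on the legacy cricket::VideoCapturer API. Header paths and exact signatures vary between WebRTC revisions, so treat the method names below as an illustration to verify against your checkout, not as the definitive interface:
// Sketch of a capturer that accepts frames pushed from outside
// (file reader, YUV stream, etc.). Verify the header path and
// signatures against your WebRTC revision.
#include "webrtc/media/base/videocapturer.h"

class CustomVideoCapturer : public cricket::VideoCapturer {
 public:
  cricket::CaptureState Start(const cricket::VideoFormat& format) override {
    SetCaptureFormat(&format);
    running_ = true;
    SetCaptureState(cricket::CS_RUNNING);
    return cricket::CS_RUNNING;
  }
  void Stop() override {
    running_ = false;
    SetCaptureFormat(nullptr);
    SetCaptureState(cricket::CS_STOPPED);
  }
  bool IsRunning() override { return running_; }
  bool IsScreencast() const override { return false; }
  bool GetPreferredFourccs(std::vector<uint32_t>* fourccs) override {
    fourccs->push_back(cricket::FOURCC_I420);
    return true;
  }
  // The "callback": feed a frame from any source (file, network, ...).
  void PushFrame(const webrtc::VideoFrame& frame) {
    if (running_)
      OnFrame(frame, frame.width(), frame.height());
  }
 private:
  bool running_ = false;
};
An instance of this class can then be moved into CreateVideoSource() in step two above, and you push frames via PushFrame() whenever your source produces one.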
If someone wants any further explanation, please feel free to ask.

Related

OMNeT++ direct message transmission visualizations in 3D

I am new to OMNeT++ and I'm trying to implement a drone network whose nodes communicate with each other using direct messages.
I want to visualize my drone network with the 3D visualization in OMNeT++ using the OsgVisualizer in the inet.visualizer.scene package.
In the dronenetwork.ned file, I have used the IntegratedVisualizer and the OsgGeographicCoordinateSystem. Then in the omnetpp.ini file, the map file to be used is defined, and so map loading and drone mobility work fine in the 3D visualization of the simulation run.
However, the message transmissions between the drones are not visualized in 3D, even though they are properly visualized in the 2D canvas mode.
I tried adding both NetworkNodeOsgVisualizer and NetworkConnectionOsgVisualizer to my drone module as visualization simple modules, and I have also defined the drones as @networkNode and @networkConnectionNode. But the message transmissions are still not visualized.
Any help or hint regarding this would be highly appreciated.
The code used for visualization in the simple module drone is as follows:
import inet.visualizer.scene.NetworkNodeOsgVisualizer;
import inet.visualizer.scene.NetworkConnectionOsgVisualizer;

module drone
{
    parameters:
        @networkNode;
        @networkConnection;
    submodules:
        networkNodeOsgVisualizer: NetworkNodeOsgVisualizer {
            @display("p=207,50");
            displayModuleName = true;
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
        }
        networkConnectionOsgVisualizer: NetworkConnectionOsgVisualizer {
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
            displayNetworkConnections = true;
        }
}
Thank you
Message passing and direct message sending visualizations are special cases implemented by Qtenv automatically, for the 2D (default) visualization only. You can add custom 2D message visualization (like the one in the aloha example). OMNeT++ does not provide any 3D visualization by default; all the code must be provided by the model (INET in this case). This is also true for any transient visualization. There is an example of this in the osg-earth OMNeT++ example, where communication between cows is visualized with inflating bubbles.
So, you have to implement your own visualization effect. There is something in INET which is pretty close to what you want: DataLinkOsgVisualizer and PhysicalLinkOsgVisualizer, which flash an arrow when communication has occurred on the data link or physical layer. This is not the same as message passing, but it is close enough. Or you can implement your own animation, using these visualizers as a sample.
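For example, assuming your network already instantiates INET's IntegratedVisualizer under the name visualizer, enabling the data link activity arrows should only take an omnetpp.ini entry along these lines (parameter names taken from the INET visualization showcases; verify them against your INET version):
[General]
# flash an arrow between nodes whenever data link layer activity occurs,
# in both the 2D canvas and the OSG (3D) views
*.visualizer.*.dataLinkVisualizer.displayLinks = true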

Cast video to TV using MediaPlayerElement

I have the following XAML
<MediaPlayerElement x:Name="EmbeddedPlayer" AreTransportControlsEnabled="True" HorizontalAlignment="Stretch" IsDoubleTapEnabled="True" DoubleTapped="OnEmbeddedPlayerDoubleTapped"> </MediaPlayerElement>
According to this official documentation, I should be able to use the built-in Cast button to cast the video to my TV. The Movies & TV app can do that: when I click the cast button in that app, it lists the available targets. But when I do the same thing in my app, it asks me to make sure that the devices are discoverable, and there is no progress ring indicating that device searching/discovery is going on. (I am on a Lumia 635.) Once again, I feel the frustration of a mismatch between documentation and reality!
Is there a complete working example for video/audio casting?
EDIT: I added the simplified code following the third method for device discovery given in the article:
using namespace Windows::Devices::Enumeration;

MainPage::MainPage()
{
    // Other set up

    // NOTE: these should really be class members so that the watcher
    // outlives the constructor.
    DeviceWatcher ^deviceWatcher;
    CastingConnection ^castingConnection;

    // Create our watcher and have it find casting devices capable of video casting
    deviceWatcher = DeviceInformation::CreateWatcher(CastingDevice::GetDeviceSelector(CastingPlaybackTypes::Video));

    // Register for watcher events
    deviceWatcher->Added += ref new TypedEventHandler<DeviceWatcher^, DeviceInformation^>(this, &MainPage::DeviceWatcher_Added);

    deviceWatcher->Start();
}
void MainPage::DeviceWatcher_Added(Windows::Devices::Enumeration::DeviceWatcher^ sender, Windows::Devices::Enumeration::DeviceInformation^ args)
{
    Dispatcher->RunAsync(Windows::UI::Core::CoreDispatcherPriority::Normal, ref new DispatchedHandler([args]()
    {
        // Add each discovered device to our listbox
        create_task(CastingDevice::FromIdAsync(args->Id)).then([](CastingDevice^ addedDevice)
        {
            OutputDebugString(("Found cast device " + addedDevice->FriendlyName + "\n")->Data());
        }, task_continuation_context::use_current());
        //castingDevicesListBox.Items.Add(addedDevice);
    }));
}
As I anticipated, no device is discovered. There are probably some extra steps to take care of permissions (allowing the app to discover and cast to devices) etc. that are never specified in the documentation.
Contrary to the documentation, one MUST NOT use MediaPlayerElement; instead, use the deprecated MediaElement for casting on mobile. This hint is taken from https://social.msdn.microsoft.com/Forums/en-US/0c37a74f-1331-4fb8-bfdf-3df11d953098/uwp-mediaplayerelement-mediacasting-is-broken-?forum=wpdevelop
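As a minimal sketch of the swap, reusing the attributes from the question's XAML (the element name changes; the attribute set carries over since MediaElement also exposes AreTransportControlsEnabled and the UIElement tap events):
<MediaElement x:Name="EmbeddedPlayer" AreTransportControlsEnabled="True" HorizontalAlignment="Stretch" IsDoubleTapEnabled="True" DoubleTapped="OnEmbeddedPlayerDoubleTapped" />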
This also solves a problem I previously asked about showing video in fullscreen: fullscreen works like a charm with MediaElement on mobile, but not with the supposedly upgraded MediaPlayerElement.
I should have realized this obvious fact, given that Microsoft has already abandoned Windows 10 Mobile.

Cocos2d-x using C++ and Lua

I plan to use cocos2d-x with C++ to do most of the work in the application, and use Lua for some of the UI work.
And here is my question:
In bool AppDelegate::applicationDidFinishLaunching(), I use this code to configure the design screen size:
glview->setDesignResolutionSize(designResolutionSize.width, designResolutionSize.height, ResolutionPolicy::NO_BORDER);
But what happens if I change the config in config.lua?
CONFIG_SCREEN_ORIENTATION = "landscape"
-- design resolution
CONFIG_SCREEN_WIDTH = 960
CONFIG_SCREEN_HEIGHT = 640
-- auto scale mode
CONFIG_SCREEN_AUTOSCALE = "FIXED_HEIGHT"
Is there any solution that resolves this config issue when using both C++ and Lua?
Finally, after some testing, I found a way to resolve it.
The C++ code only takes effect in the C++ environment, and the Lua config only takes effect in the Lua environment, so we have to keep their properties the same.
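In other words, the design resolution set in C++ has to mirror the values in config.lua. A minimal sketch, assuming the config.lua values shown above (960x640, FIXED_HEIGHT):
// AppDelegate.cpp: mirror CONFIG_SCREEN_WIDTH/HEIGHT and
// CONFIG_SCREEN_AUTOSCALE from config.lua so that both
// environments agree on the design resolution.
glview->setDesignResolutionSize(960, 640, ResolutionPolicy::FIXED_HEIGHT);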

WinRT API Windows::System::Launcher::LaunchFileAsync() usage from C++

I'm trying to launch an image using the WinRT API Windows::System::Launcher::LaunchFileAsync().
Code snippet is as follows:
RoInitialize(RO_INIT_MULTITHREADED);
String^ imagePath = ref new String(L"C:\\Users\\GoodMan\\Pictures\\wood.png");
auto file = Storage::StorageFile::GetFileFromPathAsync(imagePath);
Windows::System::Launcher::LaunchFileAsync(file);
I'm getting this error from the LaunchFileAsync() API:
error C2665: 'Windows::System::Launcher::LaunchFileAsync' : none of
the 2 overloads could convert all the argument types
Can I please get help on how to solve this? I'm very new to WinRT C++ coding.
The method GetFileFromPathAsync does not return a StorageFile, but it returns IAsyncOperation<StorageFile>^. What you have to do is convert the latter to the former, as follows:
#include <ppltasks.h>

using namespace concurrency;

String^ imagePath = ref new String(L"C:\\Users\\GoodMan\\Pictures\\wood.png");
auto task = create_task(Windows::Storage::StorageFile::GetFileFromPathAsync(imagePath));
task.then([this](Windows::Storage::StorageFile^ file)
{
    Windows::System::Launcher::LaunchFileAsync(file);
});
Generally all Windows Store app framework methods that end in Async will return either an IAsyncOperation or a task. These methods are what are known as asynchronous methods, and require some special handling. See this article for more info: Asynchronous programming in C++.
So now everything is great, correct? Well, not quite. There is another issue with your code. It is that when you run the code above, you will get an access-denied error. The reason is that Windows Store Apps are sandboxed, and you cannot generally access just any file on the filesystem.
You are in luck, though, because you are trying to access a file in your Pictures folder. The Pictures folder is a special folder that Windows Store apps have access to. You can get at it using the KnownFolders class:
using namespace concurrency;

Windows::Storage::StorageFolder^ pictures =
    Windows::Storage::KnownFolders::PicturesLibrary;
auto task = create_task(pictures->GetFileAsync("wood.png"));
task.then([this](Windows::Storage::StorageFile^ file)
{
    Windows::System::Launcher::LaunchFileAsync(file);
});
Note that in order to access the Pictures folder, your application has to declare it in the project manifest. To do so, double click on the Package.appxmanifest file in the project "tree" in Visual Studio, and select the Capabilities tab. Then under Capabilities, check Pictures Library.
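Checking the box just adds a capability entry to the manifest XML. For a Windows 8.1 / Windows Phone 8.1 project it ends up looking roughly like this (a UWP project uses the uap: namespace prefix on the Capability element instead):
<Capabilities>
    <Capability Name="picturesLibrary" />
</Capabilities>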

CLSID for x264 DirectShow filter

I have used the x264 DirectShow filter from Monogram for decoding H.264 AVC video. I need to create an instance and add the filter to a graph in DirectShow. I checked its CLSID in GraphEdit, and it is 'x264'. I guess that to create an instance we need the GUID for that filter, and I have no clue how to create a filter instance using the 'x264' value.
I am using DirectShow with VC++.
Does anybody have an idea on this?
As this filter is open source, you only need to look in the right headers. You just need to copy CLSID_MonogramX264 from here and create the filter with CoCreateInstance.
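A minimal sketch of the instantiation, assuming CLSID_MonogramX264 has been copied from the Monogram headers (the actual GUID value is omitted here; take it from the filter's source):
#include <dshow.h>

// Copied from the Monogram headers; GUID value omitted in this sketch.
extern const CLSID CLSID_MonogramX264;

HRESULT AddX264Filter(IGraphBuilder* graph)
{
    // Create the filter instance from its CLSID...
    IBaseFilter* filter = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_MonogramX264, nullptr, CLSCTX_INPROC_SERVER,
                                  IID_IBaseFilter, reinterpret_cast<void**>(&filter));
    if (FAILED(hr))
        return hr;

    // ...and add it to the graph.
    hr = graph->AddFilter(filter, L"MONOGRAM x264");
    filter->Release();
    return hr;
}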
You can use Monogram GraphStudio to see a filter's CLSID; as I remember, when I checked it last time, all was OK.