I'm trying to rewrite this piece of code in Vala:
gstreamer example
I got stuck at this line:
watch_id = gst_bus_add_watch (bus, message_handler, NULL);
My Vala equivalent:
var watch_id = bus.add_watch (Priority.DEFAULT, message_handler);
I haven't got a clue how to write the BusFunc and its expected arguments:
BusFunc
Complete code so far:
using Gst;
bool Gst.BusFunc message_handler ()
{
return false;
}
void main (string[] args) {
// Initializing GStreamer
Gst.init (ref args);
var caps = Caps.from_string("audio/x-raw,channels=2");
// Creating pipeline and elements
var pipeline = new Pipeline ("my_pipeline");
var bin = new Bin ("my_bin");
var bus = new Bus ();
var src = ElementFactory.make ("autoaudiosrc", "my_src");
var sink = ElementFactory.make ("autoaudiosink", "my_sink");
var convert = ElementFactory.make ("audioconvert", "my_convert");
var level = ElementFactory.make ("level", "my_level");
var fakesink = ElementFactory.make ("fakesink", "my_fakesink");
// Adding elements to pipeline
//pipeline.add_many (src, sink);
bin.add_many (pipeline, src, convert, level, fakesink);
src.link(convert);
convert.link_filtered (level, caps);
level.link(fakesink);
level.set ("post-messages", true);
fakesink.set ("sync", true);
bus = pipeline.get_bus ();
var watch_id = bus.add_watch (Priority.DEFAULT, message_handler);
// Linking source to sink
src.link (sink);
// Set pipeline state to PLAYING
pipeline.set_state (State.PLAYING);
}
Thanks in advance!
You're almost there. A delegate identifies a function signature: its parameter types and return type. The BusFunc type has the signature:
public delegate bool BusFunc (Bus bus, Message message)
so your handler will be something like:
bool message_handler (Bus my_bus, Message my_message)
{
print (#"Message type: $(my_message.type.get_name ())\n");
return true;
}
It returns true in this example to keep the watch installed; returning false would remove it. Also note that the watch is dispatched from a GLib main loop, so your main() needs a running MainLoop (e.g. new MainLoop ().run ()) before any bus messages will be delivered.
This example is not tested, but it should give you the right idea to move forward.
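For reference, here is what the C side of that delegate looks like; a minimal, untested sketch of a handler matching GstBusFunc (names are illustrative). Vala hides the trailing user_data pointer, which is how the two-argument Vala delegate maps onto the three-argument C callback:
// C API: a bus watch callback must match GstBusFunc:
//   gboolean (*GstBusFunc) (GstBus *bus, GstMessage *message, gpointer user_data);
static gboolean message_handler (GstBus *bus, GstMessage *message, gpointer user_data)
{
  g_print ("Message type: %s\n", GST_MESSAGE_TYPE_NAME (message));
  return TRUE; /* TRUE keeps the watch installed, FALSE removes it */
}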
// Import
import { ApiPromise, WsProvider } from "@polkadot/api";
// Construct
/*
https://rpc.kulupu.network
https://rpc.kulupu.network/ws
https://rpc.kulupu.corepaper.org
https://rpc.kulupu.corepaper.org/ws
*/
(async () => {
//const wsProvider = new WsProvider('wss://rpc.polkadot.io');
const wsProvider = new WsProvider("wss://rpc.kulupu.network/ws");
const api = await ApiPromise.create({ provider: wsProvider });
// Do something
const chain = await api.rpc.system.chain();
console.log(`You are connected to ${chain} !`);
console.log(await api.query.difficulty.pastDifficultiesAndTimestamps.toJSON());
console.log(api.genesisHash.toHex());
})();
The storage item pastDifficultiesAndTimestamps only holds the last 60 blocks' worth of data. To get that information you just need to fix the following:
console.log(await api.query.difficulty.pastDifficultiesAndTimestamps());
If you want to query the difficulty of blocks in general, a loop like this will work:
let best_block = await api.derive.chain.bestNumber()
// Could be 0, but that is a lot of queries...
let first_block = best_block - 100;
for (let block = first_block; block < best_block; block++) {
let block_hash = await api.rpc.chain.getBlockHash(block);
let difficulty = await api.query.difficulty.currentDifficulty.at(block_hash);
console.log(block, difficulty)
}
Note that this requires an archive node, which has information about all the blocks. Otherwise, by default, a node only stores ~256 previous blocks before state pruning cleans things up.
If you want to see how to make a query like this, but much more efficiently, look at my blog post here:
https://www.shawntabrizi.com/substrate/porting-web3-js-to-polkadot-js/
Hi, I need to stream a video file and save it using libVLC. Here is what I have done so far:
libvlc_media_t* vlcMedia = nullptr;
libvlc_instance_t* vlcInstance = libvlc_new(0, nullptr);
vlcMedia = libvlc_media_new_location(vlcInstance, aUri);
if(nullptr != vlcMedia)
{
libvlc_media_player_t* vlcMediaPlayer = libvlc_media_player_new_from_media(vlcMedia);
if(nullptr != vlcMediaPlayer)
{
libvlc_media_release(vlcMedia);
libvlc_event_manager_t* vlcMediaManager = libvlc_media_player_event_manager(vlcMediaPlayer);
if(nullptr != vlcMediaManager)
libvlc_event_attach(vlcMediaManager, libvlc_MediaPlayerEndReached, OnStopped, this);
libvlc_media_player_set_hwnd(vlcMediaPlayer, Handle);
libvlc_media_player_play(vlcMediaPlayer);
}
}
This will connect to the remote media and start playing the video. The question is: how do I direct it to save the video? I could not find the API call for that.
Thank you,
Sam
Thanks to @mtz, the solution is to add:
libvlc_media_add_option(vlcMedia, ":sout=#duplicate{dst=display,dst=std{access=file,mux=mp4,dst=xyz.mp4}}");
after the call to libvlc_media_new_location.
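For context, here is a hedged, untested C++ sketch of where that call sits in the question's original flow. Note the sout string needs balanced closing braces; aUri is the question's own variable:
// Untested sketch based on the question's code: add the option right after
// creating the media, before building the player from it.
libvlc_media_t* vlcMedia = libvlc_media_new_location(vlcInstance, aUri);
if (nullptr != vlcMedia)
{
    // Duplicate the stream: display it locally AND write it to an MP4 file.
    libvlc_media_add_option(vlcMedia,
        ":sout=#duplicate{dst=display,dst=std{access=file,mux=mp4,dst=xyz.mp4}}");

    libvlc_media_player_t* vlcMediaPlayer = libvlc_media_player_new_from_media(vlcMedia);
    libvlc_media_release(vlcMedia);
    // ... attach events, set the hwnd, and play as in the question ...
}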
Here's a C# version that you can easily adapt to C/C++
var currentDirectory = Path.GetDirectoryName(Assembly.GetEntryAssembly().Location);
var destination = Path.Combine(currentDirectory, "record.ts");
// Load native libvlc library
Core.Initialize();
using (var libvlc = new LibVLC())
using (var mediaPlayer = new MediaPlayer(libvlc))
{
// Redirect log output to the console
libvlc.Log += (sender, e) => Console.WriteLine($"[{e.Level}] {e.Module}:{e.Message}");
// Create new media with HLS link
var media = new Media(libvlc, "http://hls1.addictradio.net/addictrock_aac_hls/playlist.m3u8", FromType.FromLocation);
// Define stream output options.
// In this case stream to a file with the given path and play locally the stream while streaming it.
media.AddOption(":sout=#file{dst=" + destination + "}");
media.AddOption(":sout-keep");
// Start recording
mediaPlayer.Play(media);
Console.WriteLine($"Recording in {destination}");
Console.WriteLine("Press any key to exit");
Console.ReadKey();
}
I am trying to query a list of available video capture devices (webcams) on Windows using GStreamer 1.0 in C++.
I am using ksvideosrc as the source and I am able to capture the video input, but I can't query a list of available devices (and their caps).
On GStreamer 0.10 this was possible through GstPropertyProbe, which has been removed in GStreamer 1.0. The documentation suggests using GstDeviceMonitor, but I have had no luck with that either.
Has anyone succeeded in acquiring a list of device names? Or can you suggest another way of retrieving the available device names and their caps?
You can use GstDeviceMonitor and the gst_device_monitor_get_devices() function.
First, initialize a GstDeviceMonitor with gst_device_monitor_new().
Second, start the monitor with gst_device_monitor_start(monitor).
Third, get the device list with gst_device_monitor_get_devices(monitor).
The code would look like this:
GstDeviceMonitor* monitor= gst_device_monitor_new();
if(!gst_device_monitor_start(monitor)){
printf("WARNING: Monitor couldn't started!!\n");
}
GList* devices = gst_device_monitor_get_devices(monitor);
My references:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstDeviceMonitor.html#gst-device-monitor-get-devices
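To go from that GList to the device names and caps the question asks about, a sketch like the following should be close (untested; plain GStreamer 1.0 C API, which works as-is from C++):
// Walk the returned list, printing each device's display name and caps.
for (GList* it = devices; it != NULL; it = it->next) {
    GstDevice* device = GST_DEVICE(it->data);
    gchar* name = gst_device_get_display_name(device);
    GstCaps* caps = gst_device_get_caps(device);
    gchar* caps_str = caps ? gst_caps_to_string(caps) : NULL;
    printf("Device: %s\n  Caps: %s\n", name, caps_str ? caps_str : "(none)");
    g_free(caps_str);
    if (caps) gst_caps_unref(caps);
    g_free(name);
}
// The list owns references to the devices; release everything when done.
g_list_free_full(devices, (GDestroyNotify)gst_object_unref);
gst_device_monitor_stop(monitor);
gst_object_unref(monitor);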
Although I haven't figured out how to enumerate the device names, I've come up with a workaround to at least get the available ksvideosrc device indexes. Below is the code in Python, but you should be able to port it to C++ fairly easily, thanks to the GObject introspection bindings.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def get_ksvideosrc_device_indexes():
    device_index = 0
    video_src = Gst.ElementFactory.make('ksvideosrc')
    while True:
        video_src.set_state(Gst.State.NULL)
        video_src.set_property('device-index', device_index)
        state_change_code = video_src.set_state(Gst.State.READY)
        if state_change_code != Gst.StateChangeReturn.SUCCESS:
            video_src.set_state(Gst.State.NULL)
            break
        device_index += 1
    return range(device_index)

if __name__ == '__main__':
    Gst.init(None)  # PyGObject requires an argv argument (may be None)
    print(list(get_ksvideosrc_device_indexes()))
Note that the video source device-name property is None as of GStreamer version 1.4.5.0 on Windows for the ksvideosrc.
It's very late, but for the future...
The Gst.DeviceMonitor can be used to enumerate devices, and to register for notification when a device is added or removed.
Here's how to get device names in C# with GStreamer 1.14
static class Devices
{
public static void Run(string[] args)
{
Application.Init(ref args);
GtkSharp.GstreamerSharp.ObjectManager.Initialize();
var devmon = new DeviceMonitor();
// to show only cameras
// var caps = new Caps("video/x-raw");
// var filtId = devmon.AddFilter("Video/Source", caps);
var bus = devmon.Bus;
bus.AddWatch(OnBusMessage);
if (!devmon.Start())
{
"Device monitor cannot start".PrintErr();
return;
}
Console.WriteLine("Video devices count = " + devmon.Devices.Length);
foreach (var dev in devmon.Devices)
DumpDevice(dev);
var loop = new GLib.MainLoop();
loop.Run();
}
static void DumpDevice(Device d)
{
Console.WriteLine($"{d.DeviceClass} : {d.DisplayName} : {d.Name} ");
}
static bool OnBusMessage(Bus bus, Message message)
{
switch (message.Type)
{
case MessageType.DeviceAdded:
{
var dev = message.ParseDeviceAdded();
Console.WriteLine("Device added: ");
DumpDevice(dev);
break;
}
case MessageType.DeviceRemoved:
{
var dev = message.ParseDeviceRemoved();
Console.WriteLine("Device removed: ");
DumpDevice(dev);
break;
}
}
return true;
}
}
I am new to WinRT C++. I am trying to pass a StorageFile image from C#, open the file, and set it as the source of a BitmapImage in WinRT to extract the height and width of the image. I am using the following code.
auto openOperation = StorageImageFile->OpenAsync(FileAccessMode::Read); // from http://msdn.microsoft.com/en-us/library/windows/desktop/hh780393%28v=vs.85%29.aspx
openOperation->Completed = ref new
AsyncOperationCompletedHandler<IRandomAccessStream^>(
[=](IAsyncOperation<IRandomAccessStream^> ^operation, AsyncStatus status)
{
auto Imagestream = operation->GetResults();
BitmapImage^ bmp = ref new BitmapImage();
auto bmpOp = bmp->SetSourceAsync(Imagestream);
bmpOp->Completed = ref new
AsyncActionCompletedHandler (
[=](IAsyncAction^ action, AsyncStatus status)
{
action->GetResults();
UINT32 imageWidth = (UINT32)bmp->PixelWidth;
UINT32 imageHeight = (UINT32)bmp->PixelHeight;
});
});
This code does not seem to work. After the line BitmapImage^ bmp = ref new BitmapImage(); the debugger stops, saying no source code is found.
Can you help me write the correct code?
I think you meant to write openOperation->Completed += ref new... and bmpOp->Completed += ref new.... I'm not an expert in C++, but from what I have seen, async operations are typically wrapped in create_task calls. Not really sure why - maybe to avoid subscribing to events without unsubscribing?
I think it should look roughly like this:
// create_task and .then need #include <ppltasks.h> and using namespace concurrency;
auto bmp = ref new BitmapImage();
create_task(storageImageFile->OpenAsync(FileAccessMode::Read)) // get the stream
.then([bmp](IRandomAccessStream^ stream) // continuation lambda
{
return create_task(bmp->SetSourceAsync(stream)); // needs to run on ASTA/Dispatcher thread
}, task_continuation_context::use_current()) // run on ASTA/Dispatcher thread
.then([bmp]() // continuation lambda
{
UINT32 imageWidth = (UINT32)bmp->PixelWidth; // needs to run on ASTA/Dispatcher thread
UINT32 imageHeight = (UINT32)bmp->PixelHeight; // needs to run on ASTA/Dispatcher thread
// TODO: use imageWidth and imageHeight
}, task_continuation_context::use_current()); // run on ASTA/Dispatcher thread
I am trying to implement screenshot functionality in a WinRT app that shows video via a MediaElement. I have the following code; it saves a screenshot that's the size of the MediaElement, but the image is empty (completely black). I tried various types of media files. If I do Win Key + Vol Down on a Surface RT, the screenshot includes the media frame content, but if I use the following code, it's blackness all around :(
private async Task SaveCurrentFrame()
{
RenderTargetBitmap renderTargetBitmap = new RenderTargetBitmap();
await renderTargetBitmap.RenderAsync(Player);
var pixelBuffer = await renderTargetBitmap.GetPixelsAsync();
MultimediaItem currentItem = (MultimediaItem)this.DefaultViewModel["Group"];
StorageFolder currentFolder = Windows.Storage.ApplicationData.Current.LocalFolder;
var saveFile = await currentFolder.CreateFileAsync(currentItem.UniqueId + ".png", CreationCollisionOption.ReplaceExisting);
if (saveFile == null)
return;
// Encode the image to the selected file on disk
using (var fileStream = await saveFile.OpenAsync(FileAccessMode.ReadWrite))
{
var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, fileStream);
encoder.SetPixelData(
BitmapPixelFormat.Bgra8,
BitmapAlphaMode.Ignore,
(uint)renderTargetBitmap.PixelWidth,
(uint)renderTargetBitmap.PixelHeight,
DisplayInformation.GetForCurrentView().LogicalDpi,
DisplayInformation.GetForCurrentView().LogicalDpi,
pixelBuffer.ToArray());
await encoder.FlushAsync();
}
}
Here MultimediaItem is my view model class that, among other things, has a UniqueId string property.
'Player' is the name of the MediaElement.
Is there anything wrong with the code, or is this approach wrong and I have to get in the trenches with C++?
P.S. I am interested in the WinRT API only.
Update 1: It looks like RenderTargetBitmap doesn't support this; the MSDN documentation clarifies it: http://msdn.microsoft.com/en-us/library/windows/apps/windows.ui.xaml.media.imaging.rendertargetbitmap
I'd appreciate any pointers on how to do it using DirectX and C++. This is a major task for me, so I'll crack this one way or the other and report back with the solution.
Yes, it is possible - a little bit tricky, but it works well.
You don't use the MediaElement, but the StorageFile itself.
You need to create a WriteableBitmap with the help of the Windows.Media.Editing namespace.
This works in UWP (Windows 10).
Here is a complete example with file picking, getting the video resolution, and saving the image to the Pictures Library:
TimeSpan timeOfFrame = new TimeSpan(0, 0, 1); // one sec
//pick mp4 file
var picker = new Windows.Storage.Pickers.FileOpenPicker();
picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;
picker.FileTypeFilter.Add(".mp4");
StorageFile pickedFile = await picker.PickSingleFileAsync();
if (pickedFile == null)
{
return;
}
///
//Get video resolution
List<string> encodingPropertiesToRetrieve = new List<string>();
encodingPropertiesToRetrieve.Add("System.Video.FrameHeight");
encodingPropertiesToRetrieve.Add("System.Video.FrameWidth");
IDictionary<string, object> encodingProperties = await pickedFile.Properties.RetrievePropertiesAsync(encodingPropertiesToRetrieve);
uint frameHeight = (uint)encodingProperties["System.Video.FrameHeight"];
uint frameWidth = (uint)encodingProperties["System.Video.FrameWidth"];
///
//Use Windows.Media.Editing to get ImageStream
var clip = await MediaClip.CreateFromFileAsync(pickedFile);
var composition = new MediaComposition();
composition.Clips.Add(clip);
var imageStream = await composition.GetThumbnailAsync(timeOfFrame, (int)frameWidth, (int)frameHeight, VideoFramePrecision.NearestFrame);
///
//generate bitmap
var writableBitmap = new WriteableBitmap((int)frameWidth, (int)frameHeight);
writableBitmap.SetSource(imageStream);
//generate some random name for file in PicturesLibrary
var saveAsTarget = await KnownFolders.PicturesLibrary.CreateFileAsync("IMG" + Guid.NewGuid().ToString().Substring(0, 4) + ".jpg");
//get stream from bitmap
Stream stream = writableBitmap.PixelBuffer.AsStream();
byte[] pixels = new byte[(uint)stream.Length];
await stream.ReadAsync(pixels, 0, pixels.Length);
using (var writeStream = await saveAsTarget.OpenAsync(FileAccessMode.ReadWrite))
{
var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, writeStream);
encoder.SetPixelData(
BitmapPixelFormat.Bgra8,
BitmapAlphaMode.Premultiplied,
(uint)writableBitmap.PixelWidth,
(uint)writableBitmap.PixelHeight,
96,
96,
pixels);
await encoder.FlushAsync();
using (var outputStream = writeStream.GetOutputStreamAt(0))
{
await outputStream.FlushAsync();
}
}
Yeah... I spent a lot of hours on this.
OK, I have managed to get taking a snapshot from the MediaElement on a button press to work.
I am passing a MediaStreamSource object to the MediaElement using the SetMediaStreamSource method. MediaStreamSource has a SampleRequested event, which is fired basically every time a new frame is drawn. Then, using a boolean, I control when to create the bitmap:
private async void MediaStream_SampleRequested(MediaStreamSource sender, MediaStreamSourceSampleRequestedEventArgs args)
{
if (!takeSnapshot)
{
return;
}
takeSnapshot = false;
Task.Run(() => DecodeAndSaveVideoFrame(args.Request.Sample));
}
After that, what is left is to decode the compressed image and convert it to a WriteableBitmap. The image is (or at least was in my case) in YUV format. You can get the byte array using
byte[] yuvArray = sample.Buffer.ToArray();
and then take the data from this array and convert it to RGB. Unfortunately I cannot post the entire code, but I'm going to give you a few more hints:
YUV to RGB wiki: here you have a wiki page describing how YUV to RGB conversion works (see the sketch after this list).
Here I found a Python project whose solution I adapted (and it works perfectly). To be more precise, you have to analyze how the NV12Converter method works.
The last thing is to change the takeSnapshot boolean to true after pressing a button or on some other trigger :).
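To make the conversion hint concrete, here is a hedged, untested sketch of the NV12-to-BGRA math those links describe, written as plain C++ since the arithmetic is language-neutral. The function name, the BT.601 integer coefficients, and the assumed buffer layout (full-resolution Y plane followed by an interleaved half-resolution UV plane) are illustrative assumptions, not the original author's code:
#include <algorithm>
#include <cstdint>

// Untested sketch: convert one NV12 frame to packed BGRA using the common
// BT.601 integer approximation (the same formulas the wiki describes).
void Nv12ToBgra(const uint8_t* nv12, int width, int height, uint8_t* bgra)
{
    const uint8_t* yPlane  = nv12;
    const uint8_t* uvPlane = nv12 + width * height; // UV starts after the Y plane

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int Y = yPlane[y * width + x];
            // Each interleaved UV pair covers a 2x2 block of Y samples.
            int uvIndex = (y / 2) * width + (x / 2) * 2;
            int U = uvPlane[uvIndex] - 128;
            int V = uvPlane[uvIndex + 1] - 128;

            int C = Y - 16;
            int r = (298 * C + 409 * V + 128) >> 8;
            int g = (298 * C - 100 * U - 208 * V + 128) >> 8;
            int b = (298 * C + 516 * U + 128) >> 8;

            uint8_t* px = bgra + 4 * (y * width + x);
            px[0] = (uint8_t)std::clamp(b, 0, 255); // B
            px[1] = (uint8_t)std::clamp(g, 0, 255); // G
            px[2] = (uint8_t)std::clamp(r, 0, 255); // R
            px[3] = 255;                            // opaque alpha
        }
    }
}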