I'm trying to display a video stream on a specific screen.
Right now I use waylandsink, which has display and fullscreen properties, so I have:
gst-launch-1.0 videotestsrc ! waylandsink display=wayland-0 fullscreen=TRUE
It works fine.
Then I check my display list using xrandr --listmonitors and I get:
Monitors: 2
+*XWAYLAND0 1920/508x1080/286+0+0 XWAYLAND0
+XWAYLAND1 1920/508x1080/286+1920+0 XWAYLAND1
So I tried to replace wayland-0 with wayland-1, but the pipeline stops.
I'm not sure whether my display name is correct or how I should obtain it (for now I took wayland-0 and simply incremented it), or whether it is even possible to do this with waylandsink at all.
Edit:
I did a lot more research (but still not enough). First, I became aware that waylandsink may not be what I'm looking for. Second, I didn't understand how rendering works on Linux (and it's still not quite clear).
But I found:
kmssink : I was not able to make it work
dfbvideosink : was not installed
fbdevsink : does not get 2D/3D hardware acceleration; works, but I have some problems (like not having another framebuffer for another display)
glimagesink : did not find a way to specify a display to render to
I'll keep searching...
I finally found it! It's kmssink, but let me explain why in more detail.
First I talked about waylandsink. The thing is, I did not know that Wayland is a protocol, and it seems to only work for displaying content inside a Desktop Environment (DE). So I guess you could create a window on each display and then link your sink to those windows. But I was looking for a way to display without any DE, so waylandsink is not an option.
For glimagesink, from what I tried it also requires a DE, so I did not explore it further.
Then there's the framebuffer, using fbdevsink. It works without a DE but suffers from limitations... There seems to be only one framebuffer, located at /dev/fb0, and what we draw into it is displayed on every screen regardless of the display resolution. So if we have two displays with different resolutions, we cannot go fullscreen without cropping on one of them. Also, we cannot display a different video on each screen because the framebuffer is duplicated. Finally, while testing it I found that sometimes frames were drawn into the framebuffer at the same time, which caused weird visual artifacts in the video.
Maybe the issues I listed could be fixed in some way, but there are just too many, so I discarded this option.
When I checked the documentation for kmssink, I saw there are two main properties I was interested in:
bus-id
connector-id
By specifying the bus-id I thought I could display on a specific screen. But all the displays use the same bus-id (0000:00:02.0), so it's not the parameter to use to select a display.
Then there is connector-id. It's an integer and it can be used to specify the display. In my case it's 77 for HDMI-A-1 and 92 for HDMI-A-2.
How do you get the connector-id? Well, that's not so simple...
A command exists to list them, called modetest. The thing is, it seems to be shipped only on some embedded devices. I found out that the command comes from the libdrm project, but in my case installing the libdrm package did not give me access to the command (on some distributions it ships in a separate package such as libdrm-tests)...
I'm using GStreamer from Rust, so by importing the drm crate I was able to get the list of connector IDs and a lot of data about the displays.
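If you are in C or C++ instead, the same information comes straight from libdrm. Here is a minimal sketch (my assumptions: the GPU is /dev/dri/card0, and you build with the libdrm include path and -ldrm):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void) {
    /* Assumption: the GPU is the first DRM device */
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

    drmModeRes *res = drmModeGetResources(fd);
    if (!res) { fprintf(stderr, "drmModeGetResources failed\n"); close(fd); return 1; }

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn) continue;
        /* connector_id is the value kmssink expects; connection tells you
           whether a display is actually plugged into that connector */
        printf("connector-id=%u type=%u %s\n",
               conn->connector_id, conn->connector_type,
               conn->connection == DRM_MODE_CONNECTED ? "connected" : "disconnected");
        drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
    close(fd);
    return 0;
}

The printed connector-id values are exactly what you pass to kmssink.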
So in the end I can do:
gst-launch-1.0 videotestsrc ! kmssink connector-id=77
or:
gst-launch-1.0 videotestsrc ! kmssink connector-id=92
To display on the screen I want to.
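And since I'm actually driving GStreamer from code rather than from gst-launch, the same property can be set through the API. A minimal sketch with the C API (the Rust bindings expose the same property; 77 is just the connector-id of HDMI-A-1 on my machine, so query yours first):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_pipeline_new("kms-demo");
    GstElement *src  = gst_element_factory_make("videotestsrc", NULL);
    GstElement *sink = gst_element_factory_make("kmssink", NULL);

    /* 77 is the connector-id of HDMI-A-1 on my machine */
    g_object_set(sink, "connector-id", 77, NULL);

    gst_bin_add_many(GST_BIN(pipeline), src, sink, NULL);
    gst_element_link(src, sink);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Keep running until the process is killed */
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}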
One last thing: kmssink needs to be executed as root to work.
Related
I'm trying to make an application in tkinter that has a number of buttons you can assign sounds to and play later. The click of a button itself only calls the play() method, so loading of the sound is done beforehand.
I tried making some kind of volume control with sliders (tk.Scale) and I noticed there is no noticeable difference between most volume values until I get very close to zero (take into consideration that the slider resolution is 0.01, over a range from 0.0 to 0.1).
At around 0.02 I think I notice the sound volume is significantly lower and if I get to zero, the sound is muted. Please note that this happens if I move the slider while no sounds are playing.
The interesting thing is, if I try playing a sound that is long enough to let me move the slider while it's playing, I can notice the difference right away, but if the sound stops playing and I try playing it again, it goes to the "default" volume again.
Since I divided my application into multiple scripts according to what they do (recording sound, playing sound, GUI), I thought the problem might be that I hadn't initialized the pygame mixer directly, but rather from an imported module, so I made a new Python script and typed this code in:
import pygame
import time
pygame.mixer.pre_init(frequency=44100, size=-16, channels=1, buffer=512)
pygame.mixer.init()
sound1=pygame.mixer.Sound("sound.wav")
sound1.set_volume(1.0)
print sound1.get_volume()
sound1.play()
time.sleep(sound1.get_length())
sound1.set_volume(0.5)
print sound1.get_volume()
sound1.play()
time.sleep(sound1.get_length())
sound1.set_volume(0.08)
print sound1.get_volume()
sound1.play()
time.sleep(sound1.get_length())
The output is the following: 1.0, 0.5, 0.078125 (one below the other), confirming that the volume has indeed been set (I hope properly - the 0.078125 is presumably just 0.08 rounded to the mixer's internal resolution of 1/128).
The only time I can notice a difference is in the third case, and even that is not very noticeable. I want the volume change to be linear, and this is far from it.
I tried the same thing with a channel:
sound1=pygame.mixer.Sound("sound.wav")
channel=pygame.mixer.find_channel(True)
channel.set_volume(1.0)
channel.play(sound1)
time.sleep(sound1.get_length()/2)
channel.set_volume(0.5)
print "Volume set"
time.sleep(sound1.get_length()/2)
No luck, the same thing happens here too.
I spent all day googling "pygame mixer volume problem" "pygame mixer volume set problem" and similar phrases, but no luck. Hopefully someone here can be of help, considering my diploma depends on a python method. :)
Thanks in advance.
I found the answer (thank you Gummbum from PyGame IRC).
The problem is not in Python or pygame itself, but rather in Windows. It seems the sound enhancements somehow interfere with the sound my script (or any other pygame script, for that matter) is playing.
I'm on Windows 10 and this is how I did it:
Right click on the speaker icon in the taskbar
Select Playback Devices
Select Speakers and Properties
Go to Enhancements tab and uncheck Equalizer and Loudness Equalization
That's it.
Music on Raspberry Pi:
Using pygame to play music on my Raspberry Pi, I found the volume way too low at settings from 0.0 to 1.0. Then I tried setting the value higher, up to 10.0 (pygame.mixer.music.set_volume(vol)), and it works great!
Maybe you need to change the file format to mp3. When I copied this code, I used an alarm sound with an .mp3 extension and ran it in Spyder (Anaconda) with Python 3.8, and it worked. There might be two solutions:
Change your Python version to 3.8
Convert the .wav file to .mp3
I am not sure whether it will work or not, but in these situations it might work at your end.
First of all, you should know this question is titled that way because that's where I ended up stuck after narrowing down my problem for quite a while. Since there are probably better approaches to my problem, I'm also explaining below what the problem is and what I've been doing to try to solve it. Suggestions on other approaches would be very welcome.
The problem
I'm using a GStreamer port for Android to render video from remote cameras through the RTSP protocol (UDP is the transport method).
Using playbin, things were working quite fine until they stopped working for a subset of these cameras.
Unfortunately I don't have access to the cameras themselves, since they belong to our company's client, but the first thing that sprang to mind was that it had to be a problem with the cameras.
Then again, there's another Android app, which we're using as a reference, that is still able to play video from these cameras normally, so I'm now doing my best to investigate the issue further on my end (our Android app).
The problem has been quite deterministic: some cameras always fail, others always work. When they fail, the reported cause is sometimes not-linked.
I managed to dump the pipeline graph associated with each of these cameras when the application tries to play video from them. I could then see that for each of the failing cameras, the associated pipeline is always missing something. Some are missing just the sink element, others are missing both the source and the sink:
Dump of pipeline with source only:
Dump of pipeline without a source or a sink:
Dump of pipeline with both (these are the cases where we can indeed play):
These are dumps of pipelines built by the playbin.
Attempted solution
I've been trying to test what would happen if I built the pipeline manually from scratch (the same one built by playbin in the third image above) and forced the video from all cameras to be processed by this pipeline. Since all cameras used to work, my guess is that negotiation is now failing for some cameras, so playbin is not building the pipeline properly for them; but if I assemble it myself, it might all work as expected (I'm assuming that rtspsrc in combination with glimagesink is also the pipeline playbin chose for playing video from these cameras).
This is how I'm trying to build this pipeline myself:
priv->pipeline = gst_pipeline_new("rtspstreamer");

source = gst_element_factory_make("rtspsrc", NULL);
if (!source) {
    GST_DEBUG("Source could not be created");
}
sink = gst_element_factory_make("glimagesink", NULL);
if (!sink) {
    GST_DEBUG("Sink could not be created");
}

if (!gst_bin_add(GST_BIN(priv->pipeline), source)) {
    GST_DEBUG("Could not add source to pipeline");
}
if (!gst_bin_add(GST_BIN(priv->pipeline), sink)) {
    GST_DEBUG("Could not add sink to pipeline");
}
if (!gst_element_link(source, sink)) {
    GST_DEBUG("Source and sink could not be linked");
}

g_object_set(source, "location", uri, NULL);
So, running the code above, I get the following error:
Source and sink could not be linked
This is where I'm stuck. How can I investigate further why these elements cannot be linked? I think maybe there should be some other element between them in the pipeline, but judging by the dump of the successful pipeline (third image above), that doesn't seem to be the case.
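One thing I'm now suspecting (not confirmed) is that rtspsrc only creates its source pads once the RTSP session is up, and that those pads carry application/x-rtp rather than raw video, so a static gst_element_link() between rtspsrc and glimagesink could never succeed. A sketch of the pad-added pattern I'm considering, going through decodebin (these are my assumptions, not something taken from the playbin dump):

/* Sketch only - rtspsrc and decodebin both expose their pads dynamically,
   so the links are made from "pad-added" callbacks. */
static void on_rtspsrc_pad_added(GstElement *src, GstPad *pad, gpointer data) {
    /* A real implementation should check the pad caps here and only link
       the video stream (rtspsrc may also add audio pads). */
    GstPad *sinkpad = gst_element_get_static_pad(GST_ELEMENT(data), "sink");
    if (!gst_pad_is_linked(sinkpad))
        gst_pad_link(pad, sinkpad);
    gst_object_unref(sinkpad);
}

static void on_decodebin_pad_added(GstElement *dec, GstPad *pad, gpointer data) {
    GstPad *sinkpad = gst_element_get_static_pad(GST_ELEMENT(data), "sink");
    if (!gst_pad_is_linked(sinkpad))
        gst_pad_link(pad, sinkpad);
    gst_object_unref(sinkpad);
}

/* ...in the pipeline construction code, instead of gst_element_link(): */
GstElement *decode = gst_element_factory_make("decodebin", NULL);
gst_bin_add(GST_BIN(priv->pipeline), decode);
g_object_set(source, "location", uri, NULL);
g_signal_connect(source, "pad-added", G_CALLBACK(on_rtspsrc_pad_added), decode);
g_signal_connect(decode, "pad-added", G_CALLBACK(on_decodebin_pad_added), sink);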
Thanks in advance for any help.
I have a DirectShow video renderer derived from CBaseVideoRenderer. The renderer is used in a graph that receives data from a live source (BDA). It looks like the connections are established properly, but video rendering ends immediately because there is no sample. However, audio rendering works, i.e. I can hear the sound, while DoRenderSample of my renderer is never called.
Stepping through the code in the debugger, I found out that in CBaseRenderer::StartStreaming the stream ends immediately because the member m_pMediaSample is NULL. If I replace my renderer with the EVR renderer, it shows frames, i.e. the stream does not end before the first frame with the EVR renderer, only with mine.
Why is that, and how can I fix it? I implemented (following the sample from http://www.codeproject.com/Articles/152317/DirectShow-Filters-Development-Part-Video-Render) what I understand to be the basic interface (CheckMediaType, SetMediaType and DoRenderSample), so I do not see any way to influence what is happening here...
Edit: This is the graph as seen from the ROT:
What I am basically trying to do is capture a DVB stream that uses VIDEOINFOHEADER2, which is not supported by the standard Sample Grabber. Although the channel is a public German TV channel without encryption, could it be that this is a DRM issue?
Edit 2: I have attached my renderer to another source (a Blackmagic Intensity Shuttle). It seems that the source causes the issue, because I do get samples in the other graph.
Edit 3: Following Roman's suggestion, I have created a transform filter. The graph looks like this:
and unfortunately it has the same problem, i.e. I do not get any samples (Transform is never called).
You have presumably chosen the wrong way of fetching video frames out of the media pipeline. You are effectively implementing a "network renderer": something that terminates the pipeline in order to send the data further on to the network.
A renderer which accepts the feed sounds appropriate. Implementing a custom renderer, however, is an untypical task, and there is not much information around on it. Additionally, a fully featured renderer is typically equipped with a sample scheduling part and with end-of-stream delivery - things that are relatively easy to break when you customize them by inheriting from the base classes. That is, while the approach sounds good, you might want to compare it to another option you have, which is...
A combination of Sample Grabber + Null Renderer, two standard filters to which you can attach your callback and get frames while having the pipeline properly terminated. The problem here is that the standard Sample Grabber does not support VIDEOINFOHEADER2. With another video decoder you could possibly have the feed decoded into VIDEOINFOHEADER, which is one option. Improving the Sample Grabber itself is another solution: the DirectX SDK Extras of February 2005 (dxsdk_feb2005_extras.exe) was the SDK release that included a filter similar to the standard Sample Grabber, called Grabber, located at \DirectShow\Samples\C++\DirectShow\Filters\Grabber. It is/was available in source code and came with a good description text file. It is relatively easy to extend it to accept VIDEOINFOHEADER2 and make the payload data available to your application this way.
The easiest way to get data out of a DirectShow graph, if you're not going to use MultiMedia Streaming, is probably to write your own TransInPlace filter, a sub-variety of a Transform filter. Then connect this filter to the desired stream of data you wish to monitor, and then run, pause, seek, or otherwise control the graph. The data, as it passes through the transform filter, can be manipulated however you want. We call this kind of filter a "sample grabber". Microsoft released a limited-functionality sample grabber with DX8.0. This filter is limited because it doesn't deal with DV Data or mediatypes with a format of VideoInfo2. It doesn't allow the user to receive prerolled samples. (What's a preroll sample? See the DX8.1 docs.) Its "OneShot" mode also has some problems.
To add to this, the Grabber sample is pretty simple itself - perhaps 1000 lines of code all together, including comments.
Looks like your decoder or splitter isn't demuxing the video frames. Look further up the chain to see what filters are supplying your renderer pin with data; chances are it's only recognising audio.
Try dropping the file into GraphEdit (there's a better one on the web, BTW) and see what filters it creates.
Then look at the samples in the DirectShow SDK.
I have been making a few experiments with GStreamer by using the gst-launch utility. However, ultimately, the aim is to implement this same functionality on my own application using GStreamer libraries.
The problem is that it's ultimately difficult (at least for someone that is not used to the GStreamer API) to "port" what I test on the command line to C/C++ code.
An example of a command that I may need to port is:
gst-launch filesrc location="CLIP8.mp4" ! decodebin2 ! jpegenc ! multifilesink location="test%d.jpg"
What's the most straightforward way to take such a command and write it in C in my own app?
Also, as a side question: how could I replace the multifilesink and do this work in memory instead? (I'm using OpenCV to perform a few calculations on a given image that should be extracted from the video.) Is it possible to decode directly to memory and use it right away without first saving to the filesystem? It could (and should) be sequential; I mean it would only move on to the next frame after I'm done processing the current one, so that I wouldn't have to keep thousands of frames in memory.
What do you say?
I found the solution. There's a function built in on GStreamer that parses gst-launch arguments and returns a pipeline. The function is called gst_parse_launch and is documented here: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html
I haven't tested it yet, but it's possibly the fastest way to convert what I have been testing on the command line into C/C++ code.
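For reference, a minimal sketch of how I expect it to look in C (untested, error handling kept short):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=CLIP8.mp4 ! decodebin2 ! jpegenc ! "
        "multifilesink location=test%d.jpg", &error);
    if (!pipeline) {
        g_printerr("Could not build pipeline: %s\n", error->message);
        g_error_free(error);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until an error or end-of-stream is posted on the bus */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}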
You could always pop open the source of gst-launch and grab the bits that parse out the command-line and turn it into a GStreamer pipeline.
That way you can just pass in the "command line" as a string, and the function will return a complete pipeline for you.
By the way, there is an interesting GStreamer element that provides a good way to integrate a processing pipeline into your (C/C++) application: appsink
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-appsink.html
With this one you can basically retrieve the frames from the pipeline into a big C array and do whatever you want with them. You set up a callback function, which will be activated every time a new frame is available from the pipeline thread...
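Roughly like this, sketched against the current GStreamer 1.x appsink API (in the older 0.10 API the signal is "new-buffer" and the pull call is gst_app_sink_pull_buffer()); the "... ! appsink name=sink" pipeline tail is just an example:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Runs in the streaming thread each time a frame reaches the appsink */
static GstFlowReturn on_new_sample(GstAppSink *appsink, gpointer user_data) {
    GstSample *sample = gst_app_sink_pull_sample(appsink);
    if (!sample)
        return GST_FLOW_ERROR;

    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        /* map.data / map.size is the raw frame - hand it to OpenCV here */
        g_print("got frame of %" G_GSIZE_FORMAT " bytes\n", map.size);
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}

/* ...after building a pipeline that ends in "... ! appsink name=sink": */
g_object_set(appsink, "emit-signals", TRUE, "max-buffers", 1, NULL);
g_signal_connect(appsink, "new-sample", G_CALLBACK(on_new_sample), NULL);

With max-buffers=1 (and drop left at its default of FALSE), the pipeline blocks until your callback has consumed the frame, which gives you the sequential, one-frame-at-a-time behaviour you asked about.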
I want to use Qt to create a simple GUI application that can play a local video file. I could use Phonon, which does all the work behind the scenes, but I need a little more control. I have already succeeded in implementing a GStreamer pipeline using the decodebin and autovideosink elements. Now I want to channel the output to a Qt widget.
Has anyone ever succeeded in doing this? (I suppose so since there are Qt-based video players that build upon GStreamer.) Can someone point me in the right direction on how to do it?
Note: This question is similar to my previous posted question on how to connect Qt with an incoming RTP stream. This seemed to be quite challenging. This question will be easier to answer I think.
Update 1
Patrice's suggestion to use libVLC is very helpful already. Here's a somewhat cleaner version of the code found on VLC's website:
Sample for Qt + libVLC.
However, my original question remains: How do I connect GStreamer to a Qt widget?
Update 2
After some experimentation I ended up with this working sample. It depends on GstWidget.h and GstWidget.cpp from my own little GstSupport library. However, take note that it is currently only tested on the Mac version of Qt.
To connect GStreamer with your QWidget, you need to get the window handle using QWidget::winId() and pass it to gst_x_overlay_set_xwindow_id().
Rough sample code:
sink = gst_element_factory_make("xvimagesink", "sink");
gst_element_set_state(sink, GST_STATE_READY);
QApplication::syncX();
gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), widget->winId());
Also, you will want your widget to be backed by a native window which is achieved by setting the Qt::AA_NativeWindows attribute at the application level or the Qt::WA_NativeWindow attribute at the widget level.
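For the newer GStreamer 1.x API the overlay interface was renamed. Assuming your sink implements GstVideoOverlay (xvimagesink and glimagesink do), the equivalent call would be:

#include <gst/video/videooverlay.h>

/* sink and widget as in the 0.10 snippet above */
gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(sink),
                                    (guintptr) widget->winId());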
Since Phonon is based on GStreamer, the place to look for details is the Phonon source tree (available here: http://gitorious.org/phonon/import/trees/master). For a video player you will most likely need a video display widget, such as gstreamer/videowidget.h (.cpp), which in turn uses the X11 renderer (gstreamer/x11renderer.h, .cpp). The sink used is xvimagesink, falling back to ximagesink if the first cannot be created.
The basic trick is to overlay the VideoWidget with the video output. The X11 handle needed to do this is retrieved using the QWidget::winId method, which is platform specific (as are the sinks, so no biggie).
Also, if an overlay is unavailable, a QWidgetVideoSink is used, which converts the video stream into individual frames for the WidgetRenderer class. This class, in turn, makes the current frame available as a QImage object, ready for any type of processing.
So to answer your question - use either overlays (as X11Renderer) or extract individual QImages from the video stream (as QWidgetVideoSink).
VLC itself is a Qt-based video player (since version 0.99). It can also stream or read a stream. You can find all the information you need here: http://wiki.videolan.org/Developers_Corner. You only have to create an instance of the player and associate it with a widget. Then you have full control over the player.
I have already tested it (on Linux and Windows) playing local music and video files and it works fine.
Give it a try and see by yourself.
Hope that helps.
Edit:
It seems that if you want to use VLC, you need to write or find (I do not know if one exists) a GStreamer codec, as explained on the VideoLAN wiki. I think that is what I would do.