I'm using libVLC to play a video file. If I use my code as a standalone video player, I have no issues: the video plays very well, and I can pause and resume it as I like.
When I use the same code, without modifications, in a plugin and then play the same file, something strange happens: VLC creates two audio streams for the same video file. If I pause the video using libvlc_media_player_pause(...), it pauses the video and one audio stream, while the other audio stream continues playing.
Any suggestions as to why this could be happening?
The application itself is written in Qt5. I have tested this issue with both audio and video files.
LibVLC version is 3.0.0.
The header file and source file are available as Pastebin links.
The mistake I made was in the plugin code. Two instances of NBAVPlayer were created there, leading to two audio streams, one visible video stream, and one hidden video stream. I have fixed the plugin, and now everything works properly.
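For anyone hitting the same symptom, here is a minimal sketch of the kind of wiring that avoids the problem: one shared libVLC instance and one media player, so libvlc_media_player_pause(...) always acts on the only audio stream there is. NBAVPlayer is the class name from my code; the singleton approach below is just one illustrative way to enforce a single instance.

    #include <vlc/vlc.h>

    // One shared player for the whole plugin, so it cannot be
    // instantiated twice by accident.
    class NBAVPlayer
    {
    public:
        static NBAVPlayer &instance()
        {
            static NBAVPlayer player;   // created once, on first use
            return player;
        }

        void open(const char *path)
        {
            libvlc_media_t *media = libvlc_media_new_path(vlc, path);
            libvlc_media_player_set_media(mp, media);
            libvlc_media_release(media);
        }

        void play()  { libvlc_media_player_play(mp); }
        void pause() { libvlc_media_player_pause(mp); }

    private:
        NBAVPlayer()
            : vlc(libvlc_new(0, nullptr)),
              mp(libvlc_media_player_new(vlc)) {}

        ~NBAVPlayer()
        {
            libvlc_media_player_release(mp);
            libvlc_release(vlc);
        }

        libvlc_instance_t     *vlc;
        libvlc_media_player_t *mp;
    };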
I have downloaded the latest WebRTC native code and tested the peerconnection example. In this example, video can only come from devices configured on the system (it looks for devices at /dev/videoX).
I am wondering if it is possible to take the video from a file on my local machine and pass its frames as VideoFrames in WebRTC to peerA, the peerconnection_client from the example. This video would then be passed on to peerB, a web client in the browser.
Basically: My video source should be a video file.
If you are willing to move away from Google's reference implementation, the streamer example of libdatachannel shows how to stream a video file to a browser.
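If you would rather stay within the native WebRTC stack from the question, the usual pattern is a custom track source derived from rtc::AdaptedVideoTrackSource whose frames you feed from your own file decoder. A rough sketch, assuming you already decode the file to I420 somewhere (FileVideoSource and PushFrame are illustrative names of mine; only the WebRTC types are real):

    #include "absl/types/optional.h"
    #include "api/video/i420_buffer.h"
    #include "api/video/video_frame.h"
    #include "media/base/adapted_video_track_source.h"

    // Feeds decoded frames from a local file into a WebRTC video track.
    class FileVideoSource : public rtc::AdaptedVideoTrackSource
    {
    public:
        // Call this from your decoding loop with each decoded I420 frame.
        void PushFrame(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                       int width, int height, int64_t timestamp_us)
        {
            rtc::scoped_refptr<webrtc::I420Buffer> buffer =
                webrtc::I420Buffer::Copy(width, height,
                                         y, width,       // Y plane + stride
                                         u, width / 2,   // U plane + stride
                                         v, width / 2);  // V plane + stride

            OnFrame(webrtc::VideoFrame::Builder()
                        .set_video_frame_buffer(buffer)
                        .set_timestamp_us(timestamp_us)
                        .build());
        }

        // Boilerplate required by the interface.
        SourceState state() const override { return kLive; }
        bool remote() const override { return false; }
        bool is_screencast() const override { return false; }
        absl::optional<bool> needs_denoising() const override
        {
            return absl::nullopt;
        }
    };

You then create the track from this source via the peer connection factory instead of from a capture device.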
I'm currently building a project in C++ using Visual Studio on Windows 8. The application captures video from a camera and triggers virtual animations in real time, with sounds played along with the animations.
The user has the option to record the experience as video and sound. I am already able to record the video; now I want to create an audio track of the sounds being played by the application, to later combine the video and audio files.
So, what is the best way to record the audio output of an application on Windows?
Let me stress that I do NOT want to record audio from any input devices (such as a microphone), only from the application itself.
Best regards.
There is no such thing as recording a specific application's output. If you generate the audio yourself, you make a copy for recording purposes, mix if you have multiple sources, and then use one of the APIs to produce a file, depending on your preferences: writing a WAV file directly, Windows Media audio files (ASF/WMA), DirectShow, Media Foundation, or third-party libraries.
The actual playback audio data is mixed with everything else and sent on for playback. Sometimes you can enable loopback recording to capture the fully mixed output (not just that of a specific application, though) as if it were a capture from a real-time audio input device.
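For the loopback route, that means WASAPI on Vista and later: initialize an IAudioClient on the default render endpoint with AUDCLNT_STREAMFLAGS_LOOPBACK and pull packets from an IAudioCaptureClient. A stripped-down sketch, with error handling and the actual WAV writing omitted; note again that this captures everything the device plays, not just your application:

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    // Captures the fully mixed output of the default render device.
    void CaptureLoopback()
    {
        CoInitialize(nullptr);

        IMMDeviceEnumerator *enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void **)&enumerator);

        IMMDevice *device = nullptr;
        enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        IAudioClient *client = nullptr;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                         (void **)&client);

        WAVEFORMATEX *format = nullptr;
        client->GetMixFormat(&format);

        // The loopback flag turns the render endpoint into a capture source.
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                           10000000 /* 1 s buffer, in 100 ns units */,
                           0, format, nullptr);

        IAudioCaptureClient *capture = nullptr;
        client->GetService(__uuidof(IAudioCaptureClient), (void **)&capture);
        client->Start();

        for (;;)  // real code: loop until the user stops recording
        {
            UINT32 frames = 0;
            capture->GetNextPacketSize(&frames);
            if (frames == 0) { Sleep(10); continue; }

            BYTE *data = nullptr;
            DWORD flags = 0;
            capture->GetBuffer(&data, &frames, &flags, nullptr, nullptr);
            // ... append `frames` frames of `data` to the WAV file ...
            capture->ReleaseBuffer(frames);
        }
        // client->Stop() and the Release() calls are omitted for brevity.
    }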
I am creating a video player with Phonon and Qt. Everything is working fine, but when a video in my playlist has no audio, I want to play other audio instead.
How can I do that? I mean, how can I detect that the video has no audio?
EDIT: By "no audio" I meant "no audio channel".
Qt 5 might help you out. Check out Phonon::Gstreamer::MediaObject. The API is similar to the ordinary MediaObject, but with some additional functions. The one you want is audioAvailable().
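Assuming audioAvailable() does what its name suggests (I have not verified the exact header or signature for your Phonon version), the check could look roughly like this. The names media and fallbackAudio are mine: media is the backend MediaObject described above, and fallbackAudio is a second MediaObject you hook up to an AudioOutput yourself.

    #include <Phonon/MediaObject>
    #include <Phonon/MediaSource>

    // Rough sketch only: verify audioAvailable() against your Phonon version.
    void playFallbackIfNeeded(Phonon::MediaObject *media,
                              Phonon::MediaObject *fallbackAudio)
    {
        if (!media->audioAvailable())   // no audio channel in the current video
        {
            fallbackAudio->setCurrentSource(Phonon::MediaSource("fallback.mp3"));
            fallbackAudio->play();
        }
    }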
I have a Windows native desktop app (C++/Delphi), and I'm successfully using DirectShow to display live video in it from a 'local' video capture device.
The next thing I want to do is display video from a 'remote' capture device, streamed over the LAN.
To stream the video, I guess I can use something like Expression Encoder or VLC, but I'm not sure what the easiest way is to receive/decode the streamed video. Inserting an ActiveX VLC or Flash player might be one option (although licensing may then be an issue), but I was wondering if there's any way to achieve this with DirectShow...
The application needs to run on XP, and the video decoding should ideally be royalty-free.
Suggestions, please!
Using DirectShow for receiving and displaying your video can work, but the simplicity, "openness", and performance will depend on the video format and streaming method you use.
A lot of open/free source filters exist for RTSP (e.g. based on live555), but you may also find that creating your own source filter is a better fit.
The best solution won't be the same for H.264 streamed over RTP/RTSP as for MJPEG streamed over plain UDP.
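To illustrate the receiving side: once a source filter for your protocol is registered on the machine, the graph builder can often wire up the whole decode/render chain from the URL alone. A bare sketch with no error handling (the URL is just an example):

    #include <dshow.h>
    #pragma comment(lib, "strmiids.lib")
    #pragma comment(lib, "ole32.lib")

    void PlayStream(const wchar_t *url)   // e.g. L"rtsp://camera-host/stream"
    {
        CoInitialize(nullptr);

        IGraphBuilder *graph = nullptr;
        CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                         IID_IGraphBuilder, (void **)&graph);

        // RenderFile picks a registered source filter for the URL and
        // builds the rest of the decode/render chain automatically.
        graph->RenderFile(url, nullptr);

        IMediaControl *control = nullptr;
        graph->QueryInterface(IID_IMediaControl, (void **)&control);
        control->Run();

        // ... run a message loop / wait until playback should stop ...

        control->Release();
        graph->Release();
        CoUninitialize();
    }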
My application transforms an AVI video file into another AVI file, using the OpenCV library. Unfortunately, videos created with OpenCV have no sound, as the library does not support audio.
Is there any easy way to copy the audio track from one video file to another? Maybe FFmpeg?
My application is written in Visual C++.
You can use FFmpeg. The easiest way would be to just use the command-line tool to extract and reassemble the streams. If you need your application to do it itself, looking at the FFmpeg sources for how they do it should help.
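For example, copying the audio track of the original into the OpenCV output can be a single invocation (file names are placeholders):

    ffmpeg -i processed.avi -i original.avi -map 0:v -map 1:a -c copy output.avi

Here -map 0:v takes the video from the OpenCV-generated file, -map 1:a takes the audio from the original, and -c copy copies both streams without re-encoding.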
Alternatively, as you mention VC++, why not use DirectShow? It should not be too difficult to sink the audio into a file for extraction and later sink the video/audio mix into a file for composition.