Qt 5, QCamera, video streaming over the network - C++

Can you guys help me with the best way to stream video and sound from a webcam over the network? I have to use Qt for this. Right now I'm using QCamera and QVideoWidget just to show the video stream on screen, but I need to send that stream to a server, which will then show it in a QVideoWidget as well. Maybe I need QMediaRecorder for that, but I don't see any methods that give me the raw audio/video frame data. Either way, what do you think is the best approach for streaming? Thanks a lot!

Related

QMediaPlayer - modify audio on the fly

I'm researching options for creating a simple video player. What I'd like to do is apply some audio processing (e.g. a low-pass filter, for simplicity) while playing back the video. I've looked at the Qt Multimedia API, so here's my main question:
How could I edit the audio output of a QMediaPlayer? Do I need some lower level APIs?
Additionally, if some other technologies would suit this purpose better or provide better open source libraries, feel free to suggest. I have experience with C# as well.
QMediaPlayer doesn't allow low-level access to the audio data.
I'd suggest using the QAudioOutput and QAudioDecoder classes for this purpose.
The QAudioDecoder produces QAudioBuffer objects. You can access the data() of these objects, process it (modify it) and feed it to the QIODevice that is returned by the start() method of the QAudioOutput object.
This will be the audio playback path of your player.
For the video you'll still use a muted QMediaPlayer to decode the video frames from the same file and output them to a QAbstractVideoSurface. You'll then need an algorithm to sync the video and audio frames produced by the above two methods.
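A minimal sketch of that audio path could look like the following (the file name is a placeholder, the processing step assumes signed 16-bit samples, and real code would also watch the decoder's error/finished signals and pace its writes to the output device):
#include <QCoreApplication>
#include <QAudioDecoder>
#include <QAudioOutput>
#include <QAudioBuffer>
#include <QIODevice>

// Sketch: decode an audio file, apply a trivial gain "effect",
// and push the processed samples into QAudioOutput's QIODevice.
int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QAudioDecoder decoder;
    decoder.setSourceFilename("input.wav");      // placeholder file name

    QAudioOutput *output = nullptr;              // created once the decoded format is known
    QIODevice *sink = nullptr;                   // push-mode device from QAudioOutput::start()

    QObject::connect(&decoder, &QAudioDecoder::bufferReady, [&]() {
        QAudioBuffer buffer = decoder.read();
        if (!output) {                           // lazily create the output with the decoded format
            output = new QAudioOutput(buffer.format());
            sink = output->start();
        }
        // Example processing: halve the amplitude (assumes signed 16-bit samples)
        QByteArray processed(buffer.constData<char>(), buffer.byteCount());
        qint16 *samples = reinterpret_cast<qint16 *>(processed.data());
        for (int i = 0; i < processed.size() / 2; ++i)
            samples[i] /= 2;
        sink->write(processed);                  // feed the modified data to the audio device
    });

    decoder.start();
    return app.exec();
}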

Qt5 extracting audio from video

I have a task: the user chooses a video file, and I need to extract the audio track from it, let them play it back through an equalizer, and then save that audio as MP3.
I found the media player example in Qt. As that example shows, playing video with Qt is very easy, so I'd like to accomplish this task with the Qt API alone. The only way I know to extract an audio track from a video is to use libAV or a similar library. Is there a way to get the audio out of a video with the Qt API? I also haven't found a way to save audio in a specific format with Qt. Is that possible?
Thanks a lot.

Is it feasible to use a video renderer with the source reader technique?

I am writing a live streaming application using Media Foundation, with the Source Reader technique. Is it possible to use a video renderer with the Source Reader?
If yes, please share your ideas.
Thanks in advance.
Yes, but you have to put in some effort.
The video renderer can be used within a Media Session pipeline, in which case it is managed by the pipeline and you don't need to bother with small details like sample delivery, seeking, flushing, etc. Since you preferred the alternative, higher-level Source Reader API, video rendering is no longer part of it: you are supposed to manage the video renderer yourself, if you need one.
See also:
How to send IMFSample to EVR Media Sink
Painting frames while media session is paused
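To make the difference concrete, here is a rough sketch of the pull loop an application owns when it uses the Source Reader (error handling omitted, the URL is a placeholder); delivering each IMFSample to a renderer such as the EVR, at the right time, is exactly the part you have to manage yourself:
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")

// Sketch of the Source Reader pull loop: the application, not a pipeline,
// decides when samples are read and where they go.
void PullSamples()
{
    MFStartup(MF_VERSION);

    IMFSourceReader *reader = nullptr;
    MFCreateSourceReaderFromURL(L"clip.mp4", nullptr, &reader);   // placeholder URL

    for (;;) {
        DWORD streamIndex = 0, flags = 0;
        LONGLONG timestamp = 0;
        IMFSample *sample = nullptr;

        reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                           0, &streamIndex, &flags, &timestamp, &sample);

        if (flags & MF_SOURCE_READERF_ENDOFSTREAM)
            break;

        if (sample) {
            // This is the part the Media Session would normally do for you:
            // hand the sample to a video renderer (e.g. the EVR's stream sink),
            // honouring the timestamp for correct presentation.
            sample->Release();
        }
    }

    reader->Release();
    MFShutdown();
}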

Modify audio output stream of QMediaPlayer

I'm trying to port an audio player from Qt4 to Qt5, and from Phonon to QtMultimedia.
Everything works fine with a QMediaPlayer, except that I can't find any way to modify the audio stream between the output of QMediaPlayer and the system sound.
I would like to apply a post-processing effect to the audio.
Is there any way to do this? Maybe by subclassing QMediaPlayer? I've looked at the QAudioOutput and QAudioDecoder classes, but I would have to implement a lot of things to get the same behaviour as QMediaPlayer.
Thanks!

C++ DirectShow Video and Audio capture - beginning

I have finally decided to drop VFW after the several problems I encountered during application development.
Thanks to StackOverflow, I am now aware that VFW is obsolete, and I wish to switch to DShow so that my application works on Vista/W7.
Unfortunately, the work was already done and the application has been shipped to the client, but as soon as we realized we had frame-rate problems on Vista/W7, we decided to rewrite the video class and use DirectShow to build a solid audio/video capture engine for webcams.
This will be tricky, as we have never coded with DShow, and right now we are looking for a few specific examples of how to:
Connect to a selected webcamera
similar to: capDriverConnect
Set camera resolution to 640x480 and RGB24 format (we need to convert RGB24 to YUV420 for each frame)
similar to: capSetVideoFormat / capCaptureSetSetup
Set audio capturing for this webcamera
similar to: capSetAudioFormat
Register two callbacks:
One for video frame ( we will pass frames to video encoder )
similar to: capSetCallbackOnVideoStream
One for wave buffer ( we will pass wave buffer to audio encoder )
similar to: capSetCallbackOnWaveStream
Be able to show a preview window somewhere on the parent window
similar to: capPreview
Perform Start/Stop operations when needed
Start would mean: connect and start capturing audio/video frames
Disconnect would mean: stop capturing audio/video frames
Perform drawing to the actual frame
similar to:
SetBitmapBits(CameraInput.GetFrameBitmap(), w*h*3, vdhdr->lpData); // copy the captured frame into the bitmap
// draw something with GDI+
GetBitmapBits(CameraInput.GetFrameBitmap(), w*h*3, vdhdr->lpData); // copy the modified bitmap back into the frame buffer
All of the above has already been done with VFW, but as I wrote before, we unfortunately need to switch to DirectShow.
Is there anyone who could help us put together a class that would save us months of studying DirectShow?
Your best bet for examples will be the ones from Microsoft.
Your questions are still phrased in terms of VFW, so it's hard to answer them as written. For example, in DirectShow you wouldn't register a callback to encode a video frame. Instead, you'd develop an encoder filter that would receive data from the capture source.
As an alternative, if you're only targeting Vista and later, there is the Microsoft Media Foundation. I have no experience with it so I don't know how the learning curve compares to DirectShow.
I'd suggest building a graph in GraphEdit using FFDshow filters.
GraphEdit demonstrates how a DirectShow graph is put together.
I don't think you need to build the filter class on your own. Once you've built the graph, you'll be able to watch the video in GraphEdit, and implementing the same graph in code is then a fairly simple task.
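For what it's worth, here is a rough sketch (error handling and proper COM reference counting omitted) of how such a capture graph might be assembled in code with ICaptureGraphBuilder2 and the Sample Grabber filter, which roughly covers the capDriverConnect, capSetVideoFormat and capSetCallbackOnVideoStream items from the question. Note that ISampleGrabber is declared in qedit.h, which only ships with older SDKs:
#include <dshow.h>
#include <qedit.h>    // ISampleGrabber / ISampleGrabberCB (older SDKs)
#pragma comment(lib, "strmiids.lib")

// Receives every RGB24 frame produced by the Sample Grabber
// (the DirectShow counterpart of capSetCallbackOnVideoStream).
class FrameCallback : public ISampleGrabberCB {
public:
    STDMETHODIMP_(ULONG) AddRef()  { return 1; }   // static lifetime, no real ref counting
    STDMETHODIMP_(ULONG) Release() { return 1; }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) {
        if (riid == IID_ISampleGrabberCB || riid == IID_IUnknown) { *ppv = this; return S_OK; }
        return E_NOINTERFACE;
    }
    STDMETHODIMP BufferCB(double /*time*/, BYTE *data, long len) {
        // 'data' holds one RGB24 frame; convert to YUV420 / pass to the encoder here
        return S_OK;
    }
    STDMETHODIMP SampleCB(double, IMediaSample *) { return E_NOTIMPL; }
};

void RunCaptureGraph()
{
    CoInitialize(nullptr);

    IGraphBuilder *graph = nullptr;
    ICaptureGraphBuilder2 *builder = nullptr;
    CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void **)&graph);
    CoCreateInstance(CLSID_CaptureGraphBuilder2, nullptr, CLSCTX_INPROC_SERVER,
                     IID_ICaptureGraphBuilder2, (void **)&builder);
    builder->SetFiltergraph(graph);

    // Pick the first video capture device (counterpart of capDriverConnect)
    ICreateDevEnum *devEnum = nullptr;
    IEnumMoniker *monikers = nullptr;
    IMoniker *moniker = nullptr;
    IBaseFilter *camera = nullptr;
    CoCreateInstance(CLSID_SystemDeviceEnum, nullptr, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void **)&devEnum);
    devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &monikers, 0);
    if (monikers && monikers->Next(1, &moniker, nullptr) == S_OK) {
        moniker->BindToObject(nullptr, nullptr, IID_IBaseFilter, (void **)&camera);
        graph->AddFilter(camera, L"Capture");
    }

    // Insert a Sample Grabber forced to RGB24 (counterpart of capSetVideoFormat)
    IBaseFilter *grabberFilter = nullptr;
    ISampleGrabber *grabber = nullptr;
    CoCreateInstance(CLSID_SampleGrabber, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void **)&grabberFilter);
    graph->AddFilter(grabberFilter, L"Grabber");
    grabberFilter->QueryInterface(IID_ISampleGrabber, (void **)&grabber);

    AM_MEDIA_TYPE mt = {};
    mt.majortype = MEDIATYPE_Video;
    mt.subtype = MEDIASUBTYPE_RGB24;
    grabber->SetMediaType(&mt);

    static FrameCallback callback;
    grabber->SetCallback(&callback, 1);   // 1 = call BufferCB for each frame

    // camera -> grabber -> default video renderer (this also gives a preview window)
    builder->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                          camera, grabberFilter, nullptr);

    IMediaControl *control = nullptr;
    graph->QueryInterface(IID_IMediaControl, (void **)&control);
    control->Run();    // Start capturing; control->Stop() is the Stop operation
}
Forcing the exact 640x480 resolution would go through IAMStreamConfig on the camera's capture pin, and the audio side would be a parallel path built from CLSID_AudioInputDeviceCategory with its own Sample Grabber feeding your audio encoder.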