Play audio as fast as possible in GStreamer - c++

I'm new to GStreamer, so sorry if this is kind of a dumb question, but how can I stream audio data through my pipeline as fast as possible?
I use a playbin, and I have a bunch of songs that I'd like to send through it at the greatest speed available. I'm aware of the concept of frame stepping, and I have also read this tutorial: https://gstreamer.freedesktop.org/documentation/tutorials/basic/playback-speed.html, but I still can't figure out how to do this simply, without keyboard handling. Can someone explain it to me?
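For reference, the rate-changing part of that tutorial reduces to sending a seek event with a new rate once the pipeline is playing, and that needs no keyboard handling at all. A minimal sketch, assuming a playbin that is already in GST_STATE_PLAYING and an arbitrary rate of 4.0:

```cpp
#include <gst/gst.h>

// Minimal sketch: change the playback rate of a running pipeline by
// sending a seek event from the current position (as in the linked
// playback-speed tutorial, minus the keyboard handling).
static void set_rate(GstElement *pipeline, gdouble rate) {
    gint64 position = 0;
    if (!gst_element_query_position(pipeline, GST_FORMAT_TIME, &position))
        return;                                   // position not yet known
    GstEvent *seek = gst_event_new_seek(
        rate, GST_FORMAT_TIME,
        (GstSeekFlags)(GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE),
        GST_SEEK_TYPE_SET, position,              // start from where we are
        GST_SEEK_TYPE_NONE, 0);                   // keep the current stop
    gst_element_send_event(pipeline, seek);
}
```

If the goal is to push audio through the pipeline faster than real time (for analysis, say) rather than to listen to it sped up, a common alternative is to stop the sink from synchronizing against the clock, e.g. by setting the sink's sync property to FALSE, so the pipeline runs as fast as its elements allow.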

Related

C++ playing audio live from byte array

I am using C++ and have the sample rate, number of channels, and bit depth for my audio. I also have a char array containing the audio that I want to play. I am looking for something along the lines of sending a quarter of a second (or some other short amount) of audio to be played, then sending some more, and so on. Is this possible, and if so, how would it be done?
Thanks for any help.
I've done this before with the OpenAL library.
This would require a pretty involved answer, and hopefully the OpenAL documentation can walk you through it all, but here is the source example which I wrote that plays audio streaming in from a Mumble server in Node.js.
You may need to ask a more specific question to get a better answer, as this is a fairly large topic. It may also help to list other technologies you are using, such as the target operating system(s) and any libraries you already depend on. Many desktop and game engines already have APIs for playing simple sounds, and OpenAL may be much more complex than you really need.
But, briefly, the steps of the solution are (a rough sketch follows the list):
Enumerate devices
Capture a device
Stream data to the device:
enqueue audio to a buffer with alSourceQueueBuffers
play the queued buffer with alSourcePlay
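A minimal sketch of the queue/unqueue loop behind those last two steps; fillChunk is a hypothetical helper standing in for however you copy the next ~250 ms out of your char array, and the format constants are assumptions to be replaced with your actual sample rate, channel count, and bit depth:

```cpp
#include <AL/al.h>
#include <AL/alc.h>
#include <vector>

// Hypothetical helper: copies the next slice of your audio char array
// into 'chunk' and returns the number of bytes written (0 at the end).
size_t fillChunk(std::vector<char> &chunk);

int main() {
    ALCdevice *device = alcOpenDevice(nullptr);       // default output device
    ALCcontext *context = alcCreateContext(device, nullptr);
    alcMakeContextCurrent(context);

    ALuint source, buffers[4];
    alGenSources(1, &source);
    alGenBuffers(4, buffers);

    // Assumed format: 16-bit stereo at 44100 Hz; use your real values.
    const ALenum format = AL_FORMAT_STEREO16;
    const ALsizei rate = 44100;
    std::vector<char> chunk((rate / 4) * 4);          // ~250 ms of frames

    // Prime the queue with a few buffers, then start playback.
    for (ALuint buf : buffers) {
        size_t n = fillChunk(chunk);
        alBufferData(buf, format, chunk.data(), (ALsizei)n, rate);
        alSourceQueueBuffers(source, 1, &buf);
    }
    alSourcePlay(source);

    // Streaming loop: refill buffers as the source finishes with them.
    // A real application would sleep briefly instead of busy-polling.
    for (;;) {
        ALint processed = 0;
        alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
        while (processed-- > 0) {
            ALuint buf;
            alSourceUnqueueBuffers(source, 1, &buf);
            size_t n = fillChunk(chunk);
            if (n == 0) return 0;                     // no more audio
            alBufferData(buf, format, chunk.data(), (ALsizei)n, rate);
            alSourceQueueBuffers(source, 1, &buf);
        }
    }
}
```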

Changing mp3 speed in Qt and C++ [QMediaPlayer]

I'm trying to develop a little application in which you can load an mp3 file and play it at variable speeds! (I know it already exists :-) )
I'm using Qt and C++. I already have the basic player, but I'm stuck on the rate part, because I want to change the rate smoothly (like in Mixxx) without stopping the playback. QMediaPlayer always stops when I change the value, which creates a gap in the sound. Also, I don't want the pitch to change!
I already found something called "SoundTouch", but now I'm completely clueless about what to do with it: how to process my mp3 data and how to get it to the player. The SoundTouch library is capable of doing what I want; I got that from the samples on its homepage.
How do I import the mp3 file so I can process it with the SoundTouch functions?
How can I play the output from the SoundTouch functions? (Perhaps QMediaPlayer can do the job?)
How is this done live? I have to use some kind of stream, I guess, so I can change the speed during playback and keep playing without gaps. Graphically, in my head, it has to be something that sits between the data and the player, which all data passes through live, with a small buffer (20-50 ms or so) behind it to avoid gaps while processing future data.
Any help appreciated! I'm also open to any solution other than SoundTouch, as long as I can stay with Qt/C++!
(Second thing: I want to view a waveform overview as well as a moving detail view (around the actual position in the song), so I could also use hints on how to get the waveform data.)
Thanks in advance!
As of now (Qt 5.5), this is impossible to do with QMediaPlayer alone. You need to do the following:
Decode the audio using GStreamer, FFmpeg, or the (new) QAudioDecoder: http://doc.qt.io/qt-5/qaudiodecoder.html - this will give you a raw PCM stream;
Apply SoundTouch or some other library to this raw data to change the tempo without affecting pitch. If GPL is ok, take a look at http://nsound.sourceforge.net/examples/index.html; if you develop proprietary stuff, STK might be a better choice: https://ccrma.stanford.edu/software/stk/
Output the modified data to the audio device using QAudioOutput.
This strategy uses Qt as much as possible and gives you the best platform coverage (you still lose Android, though, as it does not support QAudioOutput).
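A hedged sketch of that three-step chain using Qt 5 classes; the tempo value, sample format, and file name are placeholders, the SoundTouch header path varies by install, and it assumes a SoundTouch build using 16-bit integer samples. Real code also has to respect QAudioOutput's free buffer space rather than writing unconditionally:

```cpp
#include <QCoreApplication>
#include <QAudioDecoder>
#include <QAudioOutput>
#include <QAudioFormat>
#include <SoundTouch.h>   // header path varies by installation

int main(int argc, char **argv) {
    QCoreApplication app(argc, argv);

    // Ask the decoder for plain 16-bit PCM so SoundTouch can consume it.
    QAudioFormat fmt;
    fmt.setSampleRate(44100);
    fmt.setChannelCount(2);
    fmt.setSampleSize(16);
    fmt.setCodec("audio/pcm");
    fmt.setSampleType(QAudioFormat::SignedInt);

    soundtouch::SoundTouch touch;
    touch.setSampleRate(44100);
    touch.setChannels(2);
    touch.setTempo(1.25);                        // 25% faster, same pitch

    QAudioDecoder decoder;
    decoder.setAudioFormat(fmt);
    decoder.setSourceFilename("song.mp3");       // placeholder file

    QAudioOutput output(fmt);
    QIODevice *sink = output.start();

    // Step 1 -> 2 -> 3: each decoded buffer is pushed through SoundTouch,
    // and whatever it has finished processing goes to the audio device.
    QObject::connect(&decoder, &QAudioDecoder::bufferReady, [&]() {
        QAudioBuffer buf = decoder.read();
        touch.putSamples(buf.constData<short>(), buf.frameCount());
        short out[8192];                          // room for 4096 stereo frames
        uint n;
        while ((n = touch.receiveSamples(out, 4096)) > 0)
            sink->write(reinterpret_cast<const char *>(out),
                        n * 2 * sizeof(short));
    });
    decoder.start();
    return app.exec();
}
```

Changing the speed smoothly during playback then becomes a call to touch.setTempo() while the stream keeps flowing, which is exactly the gap-free behaviour QMediaPlayer cannot give you here.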

how to use c++ to do data acquisition from frame grabber

We have an "MC1362 Camera" and an "Inspecta-5" frame grabber in our lab. There is program in LABVIEW11 which gets the data from a frame grabber, however as the Labview is slow my supervisor has told me to write a program in c++ to get the data from the frame grabber. I have no idea how to write a c++ program to connect to a frame grabber and do the data acquisition. I know how to write software in c++, but have never tried programming to connect to hardware and read data from it. Is there any specific library or framework which can help me, or any tutorial?
Please, if anybody knows, help me in this matter.
Update:just to add, we are doing medical image analysis, and a laser illuminate a subject, so camera will take pictures and pass it to the computer. I need to grab the pictures and analysis them.
You basically have a couple of options:
1. See if there is an SDK for the grabber card. If there is, this is usually easier than option 2, but it is of course restricted to working with that grabber or family of grabber cards; we do it this way with the Euresys grabber cards.
2. Assuming you are running on a Windows platform, implement a DirectShow filter graph and write your own output filter to get the data (a skeleton of this approach follows below). The SDK for DirectShow is quite good and has many examples. This approach is far more flexible and should work with a number of grabbers, but it is also a lot more complex; we do it this way for USB and some other built-in grabbers.
Our software is done in Delphi 7, but it's just importing DLLs; for C++ this should be no problem, and most SDKs are written around C++ anyway.
I know it's not much, but it's a place to start.
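For option 2, here is a bare-bones skeleton of the DirectShow filter-graph setup; the device-enumeration and filter-connection steps are elided, and the output filter itself (the part that hands frames to your analysis code) is the piece you would have to write:

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Skeleton of the DirectShow route: build a filter graph, add the
// grabber's capture filter plus your own output (sink) filter, run it.
int main() {
    CoInitialize(nullptr);

    IGraphBuilder *graph = nullptr;
    CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void **)&graph);

    ICaptureGraphBuilder2 *capture = nullptr;
    CoCreateInstance(CLSID_CaptureGraphBuilder2, nullptr, CLSCTX_INPROC_SERVER,
                     IID_ICaptureGraphBuilder2, (void **)&capture);
    capture->SetFiltergraph(graph);

    // ... enumerate capture devices with ICreateDevEnum, add the frame
    // grabber's capture filter, and connect it to a custom output filter
    // that passes each frame to the image-analysis code ...

    IMediaControl *control = nullptr;
    graph->QueryInterface(IID_IMediaControl, (void **)&control);
    control->Run();                              // frames start flowing

    // ... acquire / process ...

    control->Stop();
    control->Release();
    capture->Release();
    graph->Release();
    CoUninitialize();
    return 0;
}
```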
Update
Just done a quick Google search: there is an SDK for that grabber, and at first glance it seems fairly straightforward.

Gain sole control of Audio Output, DirectSound

I am creating a basic signal generator and decided to use my audio card as the analogue output. I chose to use DirectSound because... it seemed like a good option.
I have it up and running quite nicely, but I now realize that my code uses secondary buffers, and as such any other sounds on the computer get mixed in with my generated signal. This is something of an issue: when I'm running a motor, I don't want it to get sent an MSN poke noise as a command.
In order to gain total control, I've attempted to take over the system by setting my cooperative level to DSSCL_WRITEPRIMARY. All in all, this strategy is really giving me a headache, as I am running into error after error trying to get it set up. The documentation on using the primary buffer isn't great, and I can't find any really good examples.
So my question is:
Does anyone have a good, working example of taking over and writing to the primary buffer?
Is there a simpler way of outputting a waveform to the audio card while ensuring that my application has full and sole control?
Thank you
The only related thing I've seen is:
http://blogs.msdn.com/b/matthew_van_eerde/archive/2009/04/03/sample-wasapi-exclusive-mode-event-driven-playback-app-including-the-hd-audio-alignment-dance.aspx
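That sample uses WASAPI exclusive mode rather than DirectSound, which on Vista and later is the cleaner way to get sole control of an output device. A minimal sketch of its core idea; the format and buffer duration below are illustrative only, and a real application has to negotiate them with IsFormatSupported and handle the buffer-alignment retry (the "alignment dance") the post describes:

```cpp
#include <mmdeviceapi.h>
#include <audioclient.h>

// Sketch: open the default render endpoint in WASAPI exclusive mode.
// In exclusive mode nothing else on the system is mixed into the
// stream, which is the "full and sole control" asked for above.
int main() {
    CoInitialize(nullptr);

    IMMDeviceEnumerator *devEnum = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void **)&devEnum);

    IMMDevice *device = nullptr;
    devEnum->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioClient *client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                     (void **)&client);

    WAVEFORMATEX fmt = {};                       // illustrative format
    fmt.wFormatTag = WAVE_FORMAT_PCM;
    fmt.nChannels = 2;
    fmt.nSamplesPerSec = 44100;
    fmt.wBitsPerSample = 16;
    fmt.nBlockAlign = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    const REFERENCE_TIME duration = 1000000;     // 100 ms in 100-ns units
    HRESULT hr = client->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE, 0,
                                    duration, duration, &fmt, nullptr);
    // On success, write samples via IAudioRenderClient and call Start().
    return SUCCEEDED(hr) ? 0 : 1;
}
```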

streaming video to and from multiple sources

I wanted to get some ideas on how some of you would approach this problem.
I've got a robot, that is running linux and uses a webcam (with a v4l2 driver) as one of its sensors. I've written a control panel with gtkmm. Both the server and client are written in C++. The server is the robot, client is the "control panel". The image analysis is happening on the robot, and I'd like to stream back the video from the camera to the control panel for two reasons:
A) for fun
B) to overlay image analysis results
So my question is: what are some good ways to stream video from the webcam to the control panel while still giving priority to the robot code that processes it? I'm not interested in writing my own video compression scheme and putting it through the existing networking port; a new network port (dedicated to video data) would be best, I think. The second part of the problem is how to display video in gtkmm. The video data arrives asynchronously, and I don't have control over main() in gtkmm, so I think that would be tricky.
I'm open to using things like VLC, GStreamer, or any other general compression libraries I don't know about.
thanks!
EDIT:
The robot has a 1GHz processor, running a desktop like version of linux, but no X11.
GStreamer solves nearly all of this for you, with very little effort, and it also integrates nicely with the GLib event system. GStreamer includes V4L source plugins, GTK+ output widgets, various filters to resize / encode / decode the video, and, best of all, network sinks and sources to move the data between machines.
For prototyping, you can use the gst-launch tool to assemble video pipelines and test them; it is then fairly simple to create the same pipelines programmatically in your code (see the sketch below). Search for 'GStreamer network streaming' to see examples of people doing this with webcams and the like.
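As an illustration, a gst-launch prototype translates almost directly into code through gst_parse_launch; the pipeline string below (V4L2 capture, H.264 encode, RTP over UDP) is only an example, and the host and port are placeholders:

```cpp
#include <gst/gst.h>

// Sketch: run a gst-launch-style pipeline from C++. Swap the pipeline
// string for whatever worked in your gst-launch prototype.
int main(int argc, char **argv) {
    gst_init(&argc, &argv);

    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src ! videoconvert ! x264enc tune=zerolatency "
        "! rtph264pay ! udpsink host=192.168.1.10 port=5000",  // placeholders
        &err);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GMainLoop *loop = g_main_loop_new(nullptr, FALSE);
    g_main_loop_run(loop);                       // stream until interrupted
    return 0;
}
```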
I'm not sure about the actual technologies used, but this can end up being a huge synchronization ***** if you want to avoid dropped frames. I was streaming video to a file and the network at the same time. What I eventually ended up doing was using a big circular buffer with three pointers: one write and two read. There were three control threads (plus some additional encoding threads): one writing to the buffer, which would pause if it reached a point in the buffer not yet read by both of the others, and two reader threads that would read from the buffer and write to the file/network (and pause if they got ahead of the producer). Since everything was written and read as frames, sync overhead could be kept to a minimum.
My producer was a transcoder (from another file source), but in your case you may want the camera to produce whole frames in whatever format it normally does, and only do the transcoding (with something like FFmpeg) for the server, while the robot processes the image.
Your problem is a bit more complex, though, since the robot needs real-time feedback and so can't pause and wait for the streaming server to catch up. You might therefore want to get frames to the control system as fast as possible and separately buffer some up in a circular buffer for streaming to the "control panel". Certain codecs handle dropped frames better than others, so if the network gets behind you can start overwriting frames at the end of the buffer (taking care that they're not being read). A sketch of this buffer follows.
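A sketch of that one-writer / two-reader ring buffer: each consumer keeps its own cursor, and the producer stalls instead of lapping the slowest reader. The frame type and capacity are placeholders, and the overwrite-when-behind policy from the previous paragraph is left to the caller (it would replace the `return false` in push):

```cpp
#include <algorithm>
#include <array>
#include <atomic>
#include <vector>

struct Frame { std::vector<unsigned char> data; };   // placeholder frame type

// One producer (camera/transcoder), two consumers (file, network),
// each with an independent read cursor into the same ring of frames.
template <size_t N>
class FrameRing {
public:
    // Producer: fails (caller pauses/retries) if writing would overwrite
    // a slot the slowest reader has not consumed yet.
    bool push(Frame f) {
        size_t w = write_.load(std::memory_order_relaxed);
        size_t slowest = std::min(readA_.load(std::memory_order_acquire),
                                  readB_.load(std::memory_order_acquire));
        if (w - slowest >= N) return false;          // ring is full
        slots_[w % N] = std::move(f);
        write_.store(w + 1, std::memory_order_release);
        return true;
    }

    // Consumer: 'cursor' is that reader's own index (readA_ or readB_).
    bool pop(std::atomic<size_t> &cursor, Frame &out) {
        size_t r = cursor.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire)) return false;  // empty
        out = slots_[r % N];
        cursor.store(r + 1, std::memory_order_release);
        return true;
    }

    std::atomic<size_t> readA_{0}, readB_{0};

private:
    std::array<Frame, N> slots_;
    std::atomic<size_t> write_{0};
};
```

Because each slot holds a whole frame and each thread only advances its own index, the synchronization cost is a handful of atomic loads and stores per frame, which matches the low sync overhead described above.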
When you say 'a new video port' and then start talking about VLC/GStreamer, I'm finding it hard to work out what you want. Obviously these software packages will assist in streaming and compressing via a number of protocols, but clearly you'll need a 'network port', not a 'video port', to send the stream.
If what you really mean is sending display output via a wireless video/TV feed, that's another matter; however, you'll need advice from hardware experts rather than software experts on that.
Moving on: I've done plenty of streaming over MMS/UDP protocols, and VLC handles it very well (as both server and client). However, it's designed for desktops and may not be as lightweight as you want. Something like GStreamer, MEncoder, or FFmpeg, on the other hand, is going to be better, I think. What kind of CPU does the robot have? You'll need a bit of grunt if you're planning real-time compression.
On the client side, I think you'll find a number of widgets to handle video in GTK. I would look into that before worrying about interface details.