I'm trying to develop a little application that can load an MP3 file and play it at variable speeds! (I know it already exists :-) )
I'm using Qt and C++. I already have the basic player, but I'm stuck on the rate part, because I want to change the rate smoothly (like in Mixxx) without stopping playback! QMediaPlayer always stops when I change the value, which creates a gap in the sound. Also, I don't want the pitch to change!
I already found a library called "SoundTouch", but now I'm completely clueless about what to do with it: how to process my MP3 data and how to get it to the player. The SoundTouch library is capable of doing what I want; I got that from the samples on its homepage.
How do I import the MP3 file so that I can process it with the SoundTouch functions?
How can I play the output of the SoundTouch functions? (Perhaps QMediaPlayer can do the job?)
How is this done live? I guess I need some kind of stream, so I can change the speed during playback and keep playing without gaps. In my head it's something that sits between the data and the player, which all data passes through live, with a small buffer (20-50 ms or so) behind it to avoid gaps while future data is being processed.
Any help appreciated! I'm also open to any solution other than "SoundTouch", as long as I can stay with Qt/C++!
(Second thing: I want to show a waveform overview as well as a moving section of it (around the current position in the song), so I could also use hints on how to get the waveform data.)
Thanks in advance!
As of now (Qt 5.5) this is impossible with QMediaPlayer alone. You need to do the following:
Decode the audio using GStreamer, FFmpeg or the (new) QAudioDecoder: http://doc.qt.io/qt-5/qaudiodecoder.html - this gives you a raw PCM stream;
Apply SoundTouch or some other time-stretching library to this raw data to change the tempo without changing the pitch. If GPL is OK, take a look at http://nsound.sourceforge.net/examples/index.html; if you develop proprietary stuff, STK might be a better choice: https://ccrma.stanford.edu/software/stk/
Output the modified data to the audio device using QAudioOutput.
This strategy uses Qt as much as possible and gives you the best platform coverage (you still lose Android, though, as it does not support QAudioOutput).
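To make steps 2 and 3 concrete, here is a minimal sketch of the SoundTouch part, assuming the decoder hands you 44.1 kHz stereo float samples; the function names and buffer sizes are illustrative, not a fixed API:

    // Sketch only: push decoded PCM through SoundTouch, then write the
    // time-stretched result to the QIODevice returned by QAudioOutput::start().
    // Assumes SoundTouch was built with float samples (the default SAMPLETYPE).
    #include <SoundTouch.h>   // or <soundtouch/SoundTouch.h>, depending on install
    #include <QAudioOutput>
    #include <QIODevice>

    soundtouch::SoundTouch st;

    void setupSoundTouch()
    {
        st.setSampleRate(44100);
        st.setChannels(2);
        // setTempo() changes speed without changing pitch;
        // setRate() would change both, which is not what you want here.
        st.setTempo(1.25);    // play 25% faster
    }

    // Call this for every decoded buffer (e.g. each QAudioBuffer from
    // QAudioDecoder); 'device' is the QIODevice* from QAudioOutput::start().
    void processChunk(const float *pcm, unsigned int frames, QIODevice *device)
    {
        st.putSamples(pcm, frames);          // 'frames' = sample frames, not floats
        float out[4096];                     // room for 2048 stereo frames
        unsigned int n;
        while ((n = st.receiveSamples(out, 2048)) > 0)
            device->write(reinterpret_cast<const char *>(out),
                          qint64(n) * 2 * sizeof(float));
    }

Because setTempo() can be called at any time, changing the speed mid-song just alters how fast SoundTouch consumes input relative to output; as long as QAudioOutput's buffer stays filled, there is no gap.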
Looking through all the API documentation, I can see how one could create procedural audio, but once an audio file is created, I want to play it on an object. From what I can tell, I need to play it using the PlayEventAtLocation function call in the UE4 plugin, which means I need to get the sound into an event.
I used to have this set up in Unity 4.x. I want to dynamically construct a wav file in game and play it back. The idea was to have silent audio all over the map that would loop, but play muted. The player, when in range, could capture audio from this muted audio source at their discretion.
The idea is that you have a wav file that plays in game, and at any given time I can start grabbing data from where the buffer currently is until I decide to stop. I take all the data I collected in this new buffer and create a new wav file with it.
For example, with a 20-second file I might grab a 7-second clip starting 5 seconds in, so my new audio file would cover seconds 5 to 12. I would think you could do similar things in FMOD, because I've looked at the recording examples, the gapless playback examples, etc., and it does seem to have the same functionality and access to seek within files.
Now I need to migrate this new file, which will be made in game, to something UE4 can use. Looking through the .h and .cpp files in the FMOD plugin, I see the functions only accept FMOD events to attach to a UObject. Since I haven't made an event in FMOD Studio, I can't use these. What is the sound that createSound makes? Is it a wav? An fsb? I just have a sound and don't know what format it is.
I can't use the designer tool to make this sound, because it depends on what the player does at any given time during play.
In Unity, what I did was access the buffer of an audio file, pull data from it for any given time, and place it in a new buffer that I then turned into a file. While pulling data, I would check buffer sizes and the frequency of the sound files to make sure I had gapless playback (not perfect, but pretty darn close). I'd use the audio functions in Unity to convert my array of data into a usable AudioClip and run it through a sound emitter. It was pretty nice, because I kept the original wav file muted but looping, so the player never knew what they had captured. It worked really well.
Since UE4 doesn't allow access to uncompressed PCM data, I can't do this low-level data manipulation in UE4. I had to use FMOD, but it's proving just as difficult, because either it's not documented or it lacks the functionality I want. I need help, please.
If the data created by createSound is just normal PCM wav data, then I can save it to a file, pull it into UE4 and use a standard AudioComponent. If it isn't, I need to turn it into an event so I can use FMODPlayEventAttached from the FMOD plugin library.
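For what it's worth, this is roughly what I'm picturing, sketched against FMOD's low-level API (untested, and the 16-bit 44.1 kHz stereo assumption is mine); from what I can tell, the result is an in-memory FMOD::Sound object rather than a wav or fsb on disk:

    // Sketch (untested): hand a raw PCM buffer to FMOD's low-level API.
    // FMOD_OPENMEMORY | FMOD_OPENRAW tells createSound the buffer holds raw
    // samples described by the exinfo fields below.
    #include <fmod.hpp>

    FMOD::Sound *makeSoundFromPcm(FMOD::System *system,
                                  const char *pcmData, unsigned int pcmBytes)
    {
        FMOD_CREATESOUNDEXINFO exinfo = {};
        exinfo.cbsize           = sizeof(exinfo);
        exinfo.length           = pcmBytes;   // size of the raw buffer in bytes
        exinfo.numchannels      = 2;          // my assumption: stereo
        exinfo.defaultfrequency = 44100;      // my assumption: 44.1 kHz
        exinfo.format           = FMOD_SOUND_FORMAT_PCM16;

        FMOD::Sound *sound = nullptr;
        system->createSound(pcmData,
                            FMOD_OPENMEMORY | FMOD_OPENRAW | FMOD_CREATESAMPLE,
                            &exinfo, &sound);
        return sound;  // playable via system->playSound(sound, nullptr, false, &channel)
    }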
I've made a few other posts in various places that have all gone silent. Any comment would be appreciated. I've been reading a lot of FMOD documentation these last few days, but I may still have missed something, so if you can point me in a better direction, or have anything to add, feel free.
Thanks all. I hope I was descriptive enough.
In my fun project, I'm downloading a video file from YouTube and writing it to a file on the local disk. Simultaneously, I want to play it. The objective is to cache the file on local disk, so that when I want to watch the video again, the app can play it locally, saving bandwidth.
I'm using Python 3.3.1, PyQt4/Phonon and LibVLC. So far, I'm able to do the following things:
Given a YouTube watch URL, I can download the video file and then play it using either PyQt4/Phonon or LibVLC, independently. This is not streaming.
Since LibVLC supports streaming, I'm able to play the given URL through streaming.
The second approach is very close to what I want, but since it doesn't save the file to disk, I can't play the same video locally next time.
I'm looking for some guidelines on how to proceed from here. In particular, how do I play a video from an incomplete file which is still being written to?
I'm completely fine with any API (that does the job) as long as it is in:
Python 3.3.1 (preferably)
C
C++.
I'm also looking for alternative approaches, in case my current approach is not correct or makes the problem more difficult than it actually is.
VLC supports playback of incomplete files, so if you're up for a bit of non-blocking I/O and/or parallel code, you should be able to start the download and, after a sufficient amount has been written, use LibVLC to start playback. Depending on the compression algorithm used, you may need to buffer enough so that there's always several seconds of data left in the buffer; if I recall correctly, some of the more modern algorithms record deltas and index information going both forward and backward.
You may get a few warnings / error messages / Exceptions, but I would not assume that they're fatal -- let the playback quality be your guide!
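As a minimal sketch of the idea (using the libVLC C API, since the question allows C/C++; the file name and the "wait until enough is downloaded" policy are placeholders you would tune):

    // Sketch: start playing a file that another thread is still downloading.
    // Assumes the downloader has already written a reasonable head start.
    #include <vlc/vlc.h>
    #include <chrono>
    #include <thread>

    int main()
    {
        libvlc_instance_t *vlc = libvlc_new(0, nullptr);

        // ... download thread writes "partial_download.mp4" in the background;
        // wait here until, say, several MB or several seconds of media exist ...

        libvlc_media_t *media = libvlc_media_new_path(vlc, "partial_download.mp4");
        libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
        libvlc_media_release(media);

        libvlc_media_player_play(player);
        std::this_thread::sleep_for(std::chrono::seconds(30));  // let it play

        libvlc_media_player_stop(player);
        libvlc_media_player_release(player);
        libvlc_release(vlc);
        return 0;
    }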
This is somewhat similar to some of the suggestions in the comments above, and is also related to a lot of what @abarnert said, and to a lesser extent to some of the exchange with @StackedCrooked.
We have a requirement to let users record a video of our 3D application. I can already grab the individual rendered frames, so this question is specifically about how to write frames into a video file.
I don't think writing each frame as a separate file and post-processing is a workable option.
I can look at options either to record to a simple video file for later optimising/encoding, or to write directly to a sensibly encoded format.
FFmpeg was suggested in another post, but it looks a bit daunting to me. Is it the best option? If not, what would you suggest? We can work with LGPL but not full GPL.
We're working on Windows (Win32, not MFC) in C++. Sample/pseudo code with your recommended library is very much appreciated... basically I'm after how to do three functions (sketched after the list):
startRecording() does whatever initialization is needed
recordFrame() takes a pointer to frame data and encodes it, ideally with timing data
endRecording() finalizes the video file, shuts down the video system, etc.
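To show why FFmpeg feels daunting, here is roughly what I think those three functions look like against its LGPL libavformat/libavcodec (a sketch I pieced together, untested; the MPEG-4-in-AVI choice is arbitrary, error handling is stripped, and frames are assumed to arrive already converted to YUV420P, e.g. via libswscale):

    // Sketch only: three-function recorder over FFmpeg's libavformat/libavcodec.
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    static AVFormatContext *fmt = nullptr;
    static AVCodecContext  *enc = nullptr;
    static AVStream        *st  = nullptr;
    static AVFrame         *frm = nullptr;
    static int64_t nextPts = 0;

    static void drainEncoder(bool flush)
    {
        avcodec_send_frame(enc, flush ? nullptr : frm);
        AVPacket *pkt = av_packet_alloc();
        while (avcodec_receive_packet(enc, pkt) == 0) {
            av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
            pkt->stream_index = st->index;
            av_interleaved_write_frame(fmt, pkt);
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
    }

    void startRecording(const char *path, int w, int h, int fps)
    {
        avformat_alloc_output_context2(&fmt, nullptr, nullptr, path); // e.g. "out.avi"
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
        st  = avformat_new_stream(fmt, codec);
        enc = avcodec_alloc_context3(codec);
        enc->width = w;  enc->height = h;
        enc->pix_fmt = AV_PIX_FMT_YUV420P;
        enc->time_base = AVRational{1, fps};
        enc->bit_rate = 4000000;
        if (fmt->oformat->flags & AVFMT_GLOBALHEADER)
            enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        avcodec_open2(enc, codec, nullptr);
        avcodec_parameters_from_context(st->codecpar, enc);
        st->time_base = enc->time_base;
        avio_open(&fmt->pb, path, AVIO_FLAG_WRITE);
        avformat_write_header(fmt, nullptr);
        frm = av_frame_alloc();
        frm->format = enc->pix_fmt;  frm->width = w;  frm->height = h;
        av_frame_get_buffer(frm, 0);
    }

    void recordFrame()  // caller has filled frm->data[] with the next YUV frame
    {
        frm->pts = nextPts++;   // timing in 1/fps units
        drainEncoder(false);
    }

    void endRecording()
    {
        drainEncoder(true);     // flush delayed packets out of the encoder
        av_write_trailer(fmt);
        av_frame_free(&frm);
        avcodec_free_context(&enc);
        avio_closep(&fmt->pb);
        avformat_free_context(fmt);
    }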
Check out the sources of Taksi on SourceForge: http://taksi.sourceforge.net/
You need two things:
1. A codec to compress the frames.
2. A container file format, like AVI or MPG.
Taksi uses the old Video for Windows API and AVI, not the newer COM APIs, but it still might work for you.
I'm trying to get the mpg123 audio decoder to work with Qt on Windows. How do I play the decoded audio data at the right speed with the QtMultimedia module in push mode? Currently I'm using a simple timer to play the audio, but that's not a very efficient way to do it: if I do anything else at the same time, the audio gets all distorted. Is there a better way to send the decoded data to the audio output? It would be nice if anyone could point me to some good examples using the QtMultimedia module and the QAudioOutput class. I've tried to work through the Qt example project "audiooutput", but it seems that it also uses a timer to send audio to the output in push mode. I hope I'm not being too confusing.
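For reference, this is the pull-mode alternative I've been considering instead of the timer, where QAudioOutput drives the reads itself through a QIODevice subclass (a sketch, untested; it assumes mpg123 is already opened and that the QAudioFormat matches what mpg123_getformat() reports):

    // Sketch: let QAudioOutput pull decoded data instead of pushing on a timer.
    #include <QAudioOutput>
    #include <QIODevice>
    #include <mpg123.h>

    class Mp3Source : public QIODevice
    {
    public:
        explicit Mp3Source(mpg123_handle *mh) : m_mh(mh)
        {
            open(QIODevice::ReadOnly);
        }
    protected:
        // QAudioOutput calls this whenever its internal buffer needs data,
        // so there is no user-side timer to fall behind.
        qint64 readData(char *data, qint64 maxlen) override
        {
            size_t done = 0;
            int err = mpg123_read(m_mh,
                                  reinterpret_cast<unsigned char *>(data),
                                  static_cast<size_t>(maxlen), &done);
            return (err == MPG123_OK || err == MPG123_DONE) ? qint64(done) : -1;
        }
        qint64 writeData(const char *, qint64) override { return -1; }
    private:
        mpg123_handle *m_mh;
    };

    // Usage: build a QAudioFormat matching the decoder's rate/channels, then
    // audioOutput->start(&source);  // pull mode: start() takes the device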
I also had to figure this out, and I would suggest using the Phonon framework for it.
It uses Windows Media Player as the host on Windows, QuickTime on Mac and some KDE stuff on Linux.
So it's pretty platform-independent.
If you need more low-level functionality, you should take a look at an open-source project called PortAudio. It's very easy to use, and you can manipulate or even fill buffers from code.
I used it to build an oscillator.
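Roughly like this (a minimal sketch from memory: a 440 Hz sine wave through the default output device, error checking omitted):

    // Sketch: PortAudio pulls samples via a callback, so the audio thread
    // paces itself and no user-side timer is needed.
    #include <portaudio.h>
    #include <cmath>

    static int paCallback(const void *, void *output, unsigned long frames,
                          const PaStreamCallbackTimeInfo *,
                          PaStreamCallbackFlags, void *userData)
    {
        float  *out   = static_cast<float *>(output);
        double *phase = static_cast<double *>(userData);
        for (unsigned long i = 0; i < frames; ++i) {
            out[i] = static_cast<float>(std::sin(*phase));
            *phase += 2.0 * 3.141592653589793 * 440.0 / 44100.0;  // 440 Hz at 44.1 kHz
        }
        return paContinue;
    }

    int main()
    {
        Pa_Initialize();
        double phase = 0.0;
        PaStream *stream = nullptr;
        Pa_OpenDefaultStream(&stream, 0 /*in*/, 1 /*out*/, paFloat32,
                             44100, 256, paCallback, &phase);
        Pa_StartStream(stream);
        Pa_Sleep(2000);          // play for two seconds
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }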
Hope that helps!
Best,
guitarflow
I wanted to get some ideas on how some of you would approach this problem.
I've got a robot that is running Linux and uses a webcam (with a v4l2 driver) as one of its sensors. I've written a control panel with gtkmm. Both the server and the client are written in C++: the server is the robot, the client is the "control panel". The image analysis happens on the robot, and I'd like to stream the video from the camera back to the control panel for two reasons:
A) for fun
B) to overlay image analysis results
So my question is: what are some good ways to stream video from the webcam to the control panel while giving priority to the robot code that processes it? I'm not interested in writing my own video compression scheme and putting it through the existing networking port; a new network port (dedicated to video data) would be best, I think. The second part of the problem is how to display video in gtkmm. The video data arrives asynchronously, and I don't have control over main() in gtkmm, so I think that could be tricky.
I'm open to using things like VLC, GStreamer or any other general compression libraries I don't know about.
thanks!
EDIT:
The robot has a 1 GHz processor and runs a desktop-like version of Linux, but no X11.
GStreamer solves nearly all of this for you with very little effort, and it also integrates nicely with the GLib event system. GStreamer includes V4L source plugins, GTK+ output widgets, various filters to resize/encode/decode the video, and, best of all, network sinks and sources to move the data between machines.
For prototyping, you can use the 'gst-launch' tool to assemble video pipelines and test them; then it's fairly simple to create the pipelines programmatically in your code. Search for 'GStreamer network streaming' to see examples of people doing this with webcams and the like.
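For example, here is a sketch of the sender side on the robot (GStreamer 1.0 element names; the JPEG-over-RTP choice and the host/port are placeholders to illustrate gst_parse_launch):

    // Sketch: webcam -> JPEG -> RTP -> UDP, assembled from a pipeline string.
    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);
        GError *err = nullptr;
        GstElement *pipeline = gst_parse_launch(
            "v4l2src ! videoconvert ! jpegenc ! rtpjpegpay "
            "! udpsink host=192.168.1.10 port=5000", &err);
        if (!pipeline) {
            g_printerr("Pipeline error: %s\n", err->message);
            g_error_free(err);
            return 1;
        }
        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        GMainLoop *loop = g_main_loop_new(nullptr, FALSE);
        g_main_loop_run(loop);   // runs until interrupted
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }

On the control-panel side, a matching udpsrc ! rtpjpegdepay ! jpegdec pipeline feeding a GTK+ video sink does the receiving.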
I'm not sure about the actual technologies used, but this can end up being a huge synchronization headache if you want to avoid dropped frames. I was streaming a video to a file and the network at the same time. What I eventually ended up doing was using a big circular buffer with three pointers: one write and two read. There were three control threads (plus some additional encoding threads): one writing to the buffer, which would pause if it reached a point in the buffer not yet read by both of the others, and two reader threads that would read from the buffer and write to the file/network (and pause if they got ahead of the producer). Since everything was written and read as frames, synchronization overhead could be kept to a minimum.
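The core of it looked something like this (a simplified sketch; frames are reduced to byte vectors and the encoding threads are left out):

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <vector>

    struct Frame { std::vector<unsigned char> data; };

    // One writer, two independent readers over a fixed ring of frame slots.
    class FrameRing
    {
    public:
        explicit FrameRing(std::size_t slots) : buf_(slots) {}

        // Writer blocks while the slowest reader is a full lap behind.
        void push(Frame f)
        {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return w_ - slowest() < buf_.size(); });
            buf_[w_ % buf_.size()] = std::move(f);
            ++w_;
            cv_.notify_all();
        }

        // Each reader (0 = file, 1 = network) blocks until a frame is ready.
        Frame pop(int reader)
        {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return r_[reader] < w_; });
            Frame f = buf_[r_[reader] % buf_.size()];  // copy: both readers see it
            ++r_[reader];
            cv_.notify_all();
            return f;
        }

    private:
        std::size_t slowest() const { return r_[0] < r_[1] ? r_[0] : r_[1]; }
        std::vector<Frame> buf_;
        std::size_t w_ = 0, r_[2] = {0, 0};
        std::mutex m_;
        std::condition_variable cv_;
    };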
My producer was a transcoder (from another file source), but in your case you may want the camera to produce whole frames in whatever format it normally does, and only do the transcoding (with something like FFmpeg) for the server, while the robot processes the raw image.
Your problem is a bit more complex, though, since the robot needs real-time feedback, so it can't pause and wait for the streaming server to catch up. So you might want to get frames to the control system as fast as possible and separately buffer some of them in a circular buffer for streaming to the "control panel". Certain codecs handle dropped frames better than others, so if the network falls behind you can start overwriting frames at the end of the buffer (taking care that they're not being read).
When you say 'a new video port' and then start talking about VLC/GStreamer, I find it hard to work out what you want. Obviously these software packages will help with streaming and compressing over a number of protocols, but clearly you'll need a 'network port', not a 'video port', to send the stream.
If what you really mean is sending the display output via a wireless video/TV feed, that's another matter; however, you'll need advice from hardware experts rather than software experts on that.
Moving on: I've done plenty of streaming over MMS/UDP protocols, and VLC handles it very well (as both server and client). However, it's designed for desktops and may not be as lightweight as you want. Something like GStreamer, MEncoder or FFmpeg, on the other hand, is going to be better, I think. What kind of CPU does the robot have? You'll need a bit of grunt if you're planning real-time compression.
On the client side, I think you'll find a number of widgets to handle video in GTK. I would look into that before worrying about interface details.