My first go at using SoundEffect with QML, and I'm getting mixed results with no clear understanding of why. I can successfully use QML SoundEffect in the user interface of an embedded C++ device. What I cannot solve is why some WAV files play perfectly clearly and some do not.
I'm certain my code is correct... it's something about how the audio is interpreted. I cannot share the WAV files I'm using, but here's what's happening:
I have two WAV files:
wav_file1_that_works.wav (which is 83kb)
and
wav_file2_that_does_not work.wav (which is 110kb)
Both of these files play just fine in VLC or Media Player or whatever. But when run through the QML function to play as feedback for a touch on the device, the first WAV file plays just fine, while the second one does not. It does not appear to be a hardware issue, as exactly the same thing happens in a virtual environment. I suspect there is some limitation to using WAV audio within the Qt/QML environment, but I cannot find any limits in the documentation. My only suspicion is the file size, or some other specific sound file requirement.
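Since the Qt docs describe SoundEffect as intended for low-latency playback of uncompressed files, one sanity check would be to dump each file's fmt chunk and compare. Here's a rough sketch of such a checker (it assumes the canonical header layout where the "fmt " chunk starts at byte 12, which not every WAV follows):

#include <cstdint>
#include <cstdio>

int main(int argc, char* argv[]) {
    if (argc < 2) return 1;
    FILE* f = fopen(argv[1], "rb");
    if (!f) return 1;

    uint8_t hdr[36];
    if (fread(hdr, 1, sizeof(hdr), f) != sizeof(hdr)) { fclose(f); return 1; }
    fclose(f);

    // Little-endian field readers for the canonical 44-byte WAV header.
    auto u16 = [&](int o) -> unsigned { return hdr[o] | (hdr[o + 1] << 8); };
    auto u32 = [&](int o) -> unsigned { return u16(o) | (u16(o + 2) << 16); };

    printf("format tag:      %u (1 = uncompressed PCM)\n", u16(20));
    printf("channels:        %u\n", u16(22));
    printf("sample rate:     %u\n", u32(24));
    printf("bits per sample: %u\n", u16(34));
    return 0;
}

If the two files differ in the format tag, that is a likely culprit.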
First I declare the sound link to the file:
import QtMultimedia 5.0  // required for SoundEffect

SoundEffect {
    id: playSound
    source: "qrc:/wav_file2_that_does_not work.wav"
}
Then on the UI event it's played (not the exact code, but the event certainly works like this):
MyUiItem {
    onMyUiTouched: {
        playSound.play();
    }
}
File 1 plays perfectly, and file 2 plays, but with a very distorted, scratchy sound.
I probably don't know enough about how WAV file encoding works, but on the surface both files seem to be encoded correctly.
I solved this by changing how the app is packaged: my WAV file was getting compressed. What I discovered is that if I let my enterprise deployment system do its thing, it compresses everything, including all multimedia, unless I apply certain parameters to skip compression. With those parameters set, this now works. Thanks for the help.
I'm working on a C++ project that generates frames to be converted to a video later.
The project currently dumps all frames as JPG or PNG files in a folder, and then I run ffmpeg manually to generate an mp4 video file.
This project runs on a web server, and an iOS/Android app (under development) will call this web server to have the video generated and downloaded.
The web service is pretty much done and working fine.
I don't like this approach for obvious reasons, like the server dependency, cost, etc.
I successfully created a POC that exposes the frame generator lib to Android, and I got it to save the frames in a folder. My next step is to convert them to a video. I considered using one of the ffmpeg-for-Android/iOS libs and just calling it when the frames are done.
Although that seems to fix half of the problem, I found a new one: each frame, depending on the configuration, could end up being 200 KB+ in size, so depending on the number of frames, this will take a lot of space on the user's device.
I'm sure this will become a huge problem very quickly.
So I believe the ideal solution would be to generate the mp4 file on demand as each frame is created; in the end no storage space would be taken, as I wouldn't need to save a file for each frame.
The problem is that I don't know how to do that. I don't know much about ffmpeg; I know it's open source, but I have no idea how to reference it from the frame generator and produce the video "on demand".
I heard about libav as well but, again, same problem...
I would really appreciate any suggestion on how to do it. What I need is basically a way to generate an mp4 video file given a list of frames.
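To illustrate the kind of thing I'm imagining (a rough, untested sketch; the dimensions, frame rate, and codec flags are placeholders, and I don't know how well this maps to mobile): raw frames could be piped straight into an ffmpeg child process, so only the final mp4 ever touches storage.

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int WIDTH = 640, HEIGHT = 480, NUM_FRAMES = 90;
    const char* cmd =
        "ffmpeg -y -f rawvideo -pixel_format rgb24"
        " -video_size 640x480 -framerate 30 -i -"
        " -c:v libx264 -pix_fmt yuv420p out.mp4";
    FILE* pipe = popen(cmd, "w");  // POSIX; _popen on Windows
    if (!pipe) return 1;

    std::vector<uint8_t> frame(WIDTH * HEIGHT * 3);
    for (int i = 0; i < NUM_FRAMES; ++i) {
        // The real frame generator would fill `frame` here; this just
        // fades from black to white as a placeholder.
        std::fill(frame.begin(), frame.end(),
                  static_cast<uint8_t>(i * 255 / NUM_FRAMES));
        fwrite(frame.data(), 1, frame.size(), pipe);
    }
    pclose(pipe);  // closing ffmpeg's stdin makes it finalize the mp4
    return 0;
}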
thanks for any help!
I am developing an app which needs to sample audio from the microphone. I have used QAudioRecorder and QAudioProbe to sample, and everything works fine. But I have just realized that QAudioRecorder saves the recorded audio to my Documents folder. Maybe I should use QAudioInput instead. I will redo it all if I must. But is there any way to disable the creation of that audio file? I have my samples; I don't need them on my hard drive. Thank you for the help.
Unfortunately, a storage location is inherent to QAudioRecorder, so you must use a lower-level approach to capture audio without storing it to disk.
Here is a minimal example using QAudioInput: http://doc.qt.io/qt-5/qtmultimedia-multimedia-audioinput-example.html
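A condensed sketch of the same idea (Qt 5 API; the format values are assumptions you would match to your hardware, and error handling is omitted):

#include <QAudioFormat>
#include <QAudioInput>
#include <QCoreApplication>
#include <QTimer>

int main(int argc, char* argv[]) {
    QCoreApplication app(argc, argv);

    // Describe the PCM format to capture (adjust to your device).
    QAudioFormat format;
    format.setSampleRate(44100);
    format.setChannelCount(1);
    format.setSampleSize(16);
    format.setCodec("audio/pcm");
    format.setByteOrder(QAudioFormat::LittleEndian);
    format.setSampleType(QAudioFormat::SignedInt);

    QAudioInput input(format);
    // start() with no argument returns a pull-mode QIODevice: the
    // samples stay in memory and never touch the disk.
    QIODevice* io = input.start();
    QObject::connect(io, &QIODevice::readyRead, [io]() {
        QByteArray samples = io->readAll();  // raw PCM chunk
        // ...analyze `samples` here, as you did with QAudioProbe...
    });

    QTimer::singleShot(5000, &app, &QCoreApplication::quit);  // 5 s demo
    return app.exec();
}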
Looking through all the API documentation, I can see how one could create procedural audio. But once an audio file is created, I want to play it on an object, and from what I can tell I need to play it using PlayEventAtLocation in the UE4 plugin, which means I need to get the sound into an event.
I used to have this set up in Unity 4.x. I want to dynamically construct a WAV file in game and play it back. The idea was to have silent audio all over the map that would loop but play muted. The player, when in range, would capture audio from this muted audio source at their discretion.
The idea is that you have a WAV file playing in game, and at any given time I can start grabbing data from wherever the buffer currently is until I decide to stop. I take all the data in this new buffer and create a new WAV file with it.
For example, given a 20-second file, I might grab a 7-second clip starting 5 seconds in, so my new audio file would cover seconds 5 to 12. I would think you could do similar things in FMOD, because I've looked at the recording examples, gapless playback examples, etc., and it does seem to have the same functionality and access to seek within files.
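To make those offsets concrete (assuming, say, 44.1 kHz, 16-bit stereo PCM, which is just my example format): one second is 44100 samples × 2 channels × 2 bytes = 176,400 bytes, so the 5-to-12-second slice starts at byte 882,000 and spans 7 × 176,400 = 1,234,800 bytes.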
Now I need to migrate this new file, made in game, to something UE4 can use. Looking through the .h and .cpp files in the FMOD plugin, I see that the functions accept FMOD events only, to attach to a UObject. Since I've not made an event in FMOD Studio, I can't use these. What kind of sound does createSound make? Is it a WAV? An FSB? I just have a sound and don't know what format it is.
I can't use the designer to make this sound, because it depends on the player at any given time during play.
In Unity, what I did was access the buffer of an audio file, pull data from it for any given time span, and place it in a new buffer that I then turned into a file. While pulling data, I would check buffer sizes and the frequency of the sound files to keep playback gapless (not perfect, but pretty darn close). I'd use Unity's audio functions to convert my array of data into a usable AudioClip and run it through a sound emitter. It was pretty nice: I would keep the original WAV file muted but looping, so the player never knew what they had captured.
Since UE4 doesn't allow access to uncompressed PCM data, I can't do this low-level data manipulation in UE4. I had to use FMOD, but it's proving just as difficult, because either it's not documented or it lacks the functionality I want. I need help, please.
If the data created by createSound is just normal PCM WAV data, then I can use a standard AudioComponent: just save it to a file and pull it into UE4. If it isn't, then I need to turn it into an event so I can use FMODPlayEventAttached from the FMOD plugin library.
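For what it's worth, my untested reading of the low-level FMOD docs is that createSound with the FMOD_CREATESAMPLE flag decodes the whole file into raw PCM in memory, and Sound::lock exposes that buffer, which looks like the closest analogue to what I did in Unity (error checking omitted, and "source.wav" is a placeholder):

#include <fmod.hpp>
#include <vector>

std::vector<char> grabPcm(FMOD::System* system) {
    FMOD::Sound* sound = nullptr;
    // FMOD_CREATESAMPLE decompresses the entire file into PCM up front.
    system->createSound("source.wav", FMOD_DEFAULT | FMOD_CREATESAMPLE,
                        nullptr, &sound);

    unsigned int lengthBytes = 0;
    sound->getLength(&lengthBytes, FMOD_TIMEUNIT_PCMBYTES);

    void* ptr1 = nullptr; void* ptr2 = nullptr;
    unsigned int len1 = 0, len2 = 0;
    // Lock the region you want (here the whole sound) and copy it out;
    // a 5-to-12-second slice would use byte offsets computed from the
    // sample rate, channel count, and bit depth instead.
    sound->lock(0, lengthBytes, &ptr1, &ptr2, &len1, &len2);
    std::vector<char> pcm(static_cast<char*>(ptr1),
                          static_cast<char*>(ptr1) + len1);
    sound->unlock(ptr1, ptr2, len1, len2);

    sound->release();
    return pcm;  // raw samples; writing a WAV header around them is on you
}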
I've made a few other posts in various places that have all gone unanswered, so any comment would be appreciated. I know I've been reading a lot of FMOD documentation these last few days, but I still may have missed something, so if people want to point me in a better direction, or if you have something to add, feel free.
Thanks all. I hope I was descriptive enough.
I'm trying to develop a little application in which you can load an mp3 file and play it at variable speeds! (I know it already exists :-) )
I'm using Qt and C++. I already have the basic player, but I'm stuck on the rate part, because I want to change the rate smoothly (like in Mixxx) without stopping playback! QMediaPlayer always stops when I change the value, which creates a gap in the sound. Also, I don't want the pitch to change!
I already found something called "SoundTouch", but now I'm completely clueless about what to do with it, how to process my mp3 data, and how to get the result to the player! The SoundTouch library is capable of doing what I want; I got that from the samples on its homepage.
How do I have to import the mp3 file so I can process it with the SoundTouch functions?
How can I play the output from the SoundTouch functions? (Perhaps QMediaPlayer can do the job?)
How is this done live? I have to do some kind of stream, I guess, so I can change the speed during playback and keep playing without gaps. Graphically, in my head, it has to be something that sits between the data and the player, which all data passes through live, with a small buffer (20-50 ms or so) behind it to avoid gaps while processing future data.
Any help appreciated! I'm also open to any solution other than SoundTouch, as long as I can stay with Qt/C++!
(Second thing: I want to display a waveform overview as well as a moving view of the part around the current position in the song, so I could also use hints on how to get the waveform data.)
Thanks in advance!
As of now (Qt 5.5) this is impossible to do with QMediaPlayer alone. You need to do the following:
1. Decode the audio using GStreamer, FFmpeg, or the (new) QAudioDecoder: http://doc.qt.io/qt-5/qaudiodecoder.html - this will give you a raw PCM stream.
2. Apply SoundTouch or some other library to the raw data to change the tempo without affecting the pitch. If GPL is OK, take a look at http://nsound.sourceforge.net/examples/index.html; if you develop proprietary stuff, STK might be a better choice: https://ccrma.stanford.edu/software/stk/
3. Output the modified data to the audio device using QAudioOutput.
This strategy uses Qt as much as possible and gives you the best platform coverage (you still lose Android, though, as it does not support QAudioOutput).
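A skeleton of those three steps might look like this (a sketch, not production code: it assumes SoundTouch was built for 16-bit integer samples and installed with its header as <SoundTouch.h>, that the platform's QAudioDecoder backend can decode mp3, and that it delivers 44.1 kHz stereo; real code must inspect each QAudioBuffer's format and handle errors):

#include <QAudioDecoder>
#include <QAudioFormat>
#include <QAudioOutput>
#include <QCoreApplication>
#include <SoundTouch.h>

int main(int argc, char* argv[]) {
    QCoreApplication app(argc, argv);

    // Output format for step 3; step 1's decoder is assumed to match it.
    QAudioFormat format;
    format.setSampleRate(44100);
    format.setChannelCount(2);
    format.setSampleSize(16);
    format.setCodec("audio/pcm");
    format.setByteOrder(QAudioFormat::LittleEndian);
    format.setSampleType(QAudioFormat::SignedInt);

    soundtouch::SoundTouch st;           // step 2: tempo without pitch
    st.setSampleRate(44100);
    st.setChannels(2);
    st.setTempo(1.25);                   // 25% faster, same pitch

    QAudioOutput output(format);         // step 3
    QIODevice* sink = output.start();    // push-mode device we write into

    QAudioDecoder decoder;               // step 1
    decoder.setSourceFilename("song.mp3");
    QObject::connect(&decoder, &QAudioDecoder::bufferReady, [&]() {
        QAudioBuffer buf = decoder.read();
        // putSamples/receiveSamples count stereo frames, not raw shorts.
        st.putSamples(buf.constData<short>(), buf.frameCount());
        short out[4096];                 // room for 2048 stereo frames
        unsigned int n;
        while ((n = st.receiveSamples(out, 2048)) > 0)
            sink->write(reinterpret_cast<const char*>(out),
                        n * 2 * sizeof(short));
    });
    decoder.start();

    return app.exec();
}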
I am having the same problem as this question: "SimpleAudioEngine, playing .caf files" (which is closed).
The solution there, however, does not work.
I have Battle.wav, which works just fine with
[[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"Battle.wav"];
Now, I want to convert it to .caf. According to the answer in that question, I should use this terminal line:
afconvert -f caff -d LEI16@22050 Battle.wav
The resulting file, however, does not work. When I use:
[[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"Battle.caf"];
Xcode displays the following message when I do that:
AudioStreamBasicDescription: 2 ch, 44100 Hz, 'lpcm' (0x00000C2C)
8.24-bit little-endian signed integer, deinterleaved
I don't really know what that means. What I do know, however, is that the file indeed produces no sound.
The question "playing a .caf file: works fine in simulator but not in iPhone" doesn't help either (and the problem there doesn't seem to be the same thing I'm having here).
Cocos2d-iphone 1.0.1, iPhone 4.
Try opening the file in Audacity (free) and exporting it as AIFF (.caf). That should work if it is a file format problem.
One thing that may be an issue is that the caf file uses 2 channels; it may be as simple as SimpleAudioEngine not supporting stereo sound effects. If you want background music, use .mp3, since it can be decoded in hardware and the files are generally a lot smaller.