What happens when I don't provide the -hd file? - cocos2d-iphone

Cocos2d-iphone 1.0.1
I enabled retina display for my app. I provide myimage.png, but I don't provide myimage-hd.png. When I run the game, I indeed get a message saying that the HD file was not found. Great. However, the game doesn't crash: the sprite appears in my game, apparently using the SD file.
What happened? Is it using the SD file and then resizing it? Is it still retina, but with a smaller version of my sprite?

It will fall back to using the SD file. The SD file will be half the size of the HD version of the file, had it been supplied. So if you later add the -hd file you'll see a much larger version of the image. That's something you want to avoid.
It's generally not a good idea to support Retina only partially. If you use -hd then it's recommended to use it for all assets indiscriminately. The same goes of course for -ipad and -ipadhd.
Something I learned the hard way today: if you supply only the -hd or -ipad version but don't include the regular version without the suffix, cocos2d will try to load the SD image (because that's what it always checks first). Since that fails, cocos2d (v1.1) will return a nil texture instead of looking for the -hd or -ipad version. To fix this, use CCFileUtils setIpadSuffix and setiPhoneRetinaSuffix to set the suffixes to something other than -hd or -ipad. In my case it worked to just set them to the empty string (no suffix).

WAV file with QML SoundEffect audio playback is distorted

This is my first go at using SoundEffect with QML, and I'm getting mixed results with no clear understanding of why. I can successfully use QML SoundEffect in the user interface on an embedded C++ device. The thing I cannot solve is why some WAV files play perfectly clearly and some do not.
I'm certain my code is correct... it's something about how the audio is interpreted. I cannot share the WAV files I'm using... but here's what's happening:
I have two WAV files:
wav_file1_that_works.wav (which is 83kb)
and
wav_file2_that_does_not work.wav (which is 110kb)
Both of these files play just fine in VLC or Media Player or whatever. But when run through the QML function to play as feedback for a touch on the device, the first WAV file plays just fine, while the second one does not. It does not appear to be a hardware issue, as the same issue comes up in exactly the same way in a virtual environment. I suspect there is some limitation on using WAV audio within the Qt/QML environment, but I cannot find any limits in the documentation. My only suspicion is the file size, or some other specific sound-file requirement.
First I declare the sound link to the file:
SoundEffect {
    id: playSound
    source: "qrc:/wav_file2_that_does_not work.wav"
}
Then it's played on a UI event (not the exact code, but the event certainly works like this):
MyUiItem {
    onMyUiTouched: {
        playSound.play();
    }
}
File 1 plays perfectly; file 2 plays, but with a very distorted, scratchy sound.
I probably don't know enough about how WAV file encoding works, but on the surface both files seem to be encoded correctly.
I solved this by refactoring how the app is built, as my WAV file was getting compressed. Unfortunately, I discovered that if I let my enterprise deployment system do its thing, it compresses everything, including all multimedia, unless I apply certain parameters to skip compression. Now this works. Thanks for the help.

Convert frames to video on demand

I'm working on a C++ project that generates frames to be converted to a video later.
The project currently dumps all frames as jpg or png files in a folder, and then I run ffmpeg manually to generate an mp4 video file.
This project runs on a web server, and an iOS/Android app (under development) will call this web server to have the video generated and downloaded.
The web service is pretty much done and working fine.
I don't like this approach for obvious reasons like the server dependency, cost, etc.
I successfully created a POC that exposes the frame-generator lib to Android, and I got it to save the frames in a folder; my next step now is to convert them to video. I considered using one of the ffmpeg libs for Android/iOS and just calling it when the frames are done.
Although it seems like I've fixed half of the problem, I found a new one: each frame, depending on the configuration, could end up being 200 KB+ in size, so depending on the number of frames, it will take a lot of space on the user's device.
I'm sure this will become a huge problem very easily.
So I believe the ideal solution would be to generate the mp4 file on demand as each frame is created; then no storage space would be taken, as I wouldn't need to save a file for each frame.
The problem is that I don't know how to do that. I don't know much about ffmpeg; I know it's open source, but I have no idea how to reference it from the frame generator and generate the video "on demand".
I heard about libav as well but again, same problem...
I would really appreciate any suggestion on how to do it. What I need is basically a way to generate an mp4 video file given a list of frames.
thanks for any help!
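
A minimal sketch of one way to do this, assuming a desktop/server environment where you can spawn a process: pipe raw frames straight into ffmpeg's stdin, so no per-frame files are ever written. The frame size, rate, and output name here are placeholder values:

#include <cstdio>    // popen/pclose (POSIX; use _popen/_pclose on Windows)
#include <cstdint>
#include <string>
#include <vector>

int main() {
    const int w = 640, h = 360, frames = 120;   // illustrative values

    // ffmpeg reads raw RGB24 frames from stdin and encodes H.264 into an mp4.
    std::string cmd = "ffmpeg -y -f rawvideo -pix_fmt rgb24 -s " +
                      std::to_string(w) + "x" + std::to_string(h) +
                      " -r 30 -i - -c:v libx264 -pix_fmt yuv420p out.mp4";
    FILE *enc = popen(cmd.c_str(), "w");
    if (!enc) return 1;

    std::vector<uint8_t> frame(w * h * 3);
    for (int i = 0; i < frames; ++i) {
        // Fill 'frame' from your generator here; dummy pattern for the sketch.
        for (size_t p = 0; p < frame.size(); ++p)
            frame[p] = static_cast<uint8_t>((p + i * 7) & 0xFF);
        fwrite(frame.data(), 1, frame.size(), enc);   // one frame per write
    }
    pclose(enc);   // EOF lets ffmpeg finalize the mp4
    return 0;
}

On iOS/Android, where spawning a process isn't an option, the same structure applies with the libavformat/libavcodec API directly: open an output context for the mp4, send each generated frame to the encoder, and write out packets as they arrive, so only the compressed output accumulates on the device.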

Changing mp3 speed in Qt and C++ [QMediaPlayer]

I'm trying to develop a little application in which you can load an mp3 file and play it at variable speeds! (I know it already exists :-) )
I'm using Qt and C++. I already have the basic player, but I'm stuck on the rate thing, because I want to change the rate smoothly (like in Mixxx) without stopping the playback! QMediaPlayer always stops when I change the value, which creates a gap in the sound. Also, I don't want the pitch to change!
I already found something called "SoundTouch", but now I'm completely clueless about what to do with it: how to process my mp3 data and how to get it to the player! The SoundTouch library is capable of doing what I want; I got that from the samples on its homepage.
How do I have to import the mp3 file so I can process it with the SoundTouch functions?
How can I play the output from the SoundTouch function? (Perhaps QMediaPlayer can do the job?)
How is that stuff done live? I have to do some kind of stream, I guess, so I can change the speed during play and keep playing without gaps. Graphically, in my head it has to be something that sits between the data and the player, which all data goes through live, with a small buffer (20-50 ms or so) behind it to avoid gaps while processing future data.
Any help appreciated! I'm also open to any solution other than SoundTouch, as long as I can stay with Qt/C++!
(Second thing: I want to view a waveform overview as well as a moving part of it (around the actual position in the song), so I could also use hints on how to get the waveform data.)
Thanks in advance!
As of now (Qt 5.5) this is impossible to do with QMediaPlayer only. You need to do the following:
1. Decode the audio using GStreamer, FFmpeg, or the (new) QAudioDecoder: http://doc.qt.io/qt-5/qaudiodecoder.html - this will give you a raw PCM stream;
2. Apply SoundTouch or some other library to this raw data to change the tempo without changing the pitch. If GPL is OK, take a look at http://nsound.sourceforge.net/examples/index.html; if you develop proprietary stuff, STK might be a better choice: https://ccrma.stanford.edu/software/stk/
3. Output the modified data to the audio device using QAudioOutput.
This strategy uses Qt as much as possible and gives you the best platform coverage (you still lose Android, though, as it does not support QAudioOutput).
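
A rough sketch of that three-step pipeline, assuming Qt 5 QtMultimedia and a SoundTouch build using float samples (its default); the file name and tempo are placeholders, and a real player would need buffering/throttling between decode and playback:

#include <QAudioBuffer>
#include <QAudioDecoder>
#include <QAudioFormat>
#include <QAudioOutput>
#include <QCoreApplication>
#include <QIODevice>
#include <SoundTouch.h>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    QAudioFormat fmt;                        // request float PCM from the decoder
    fmt.setSampleRate(44100);
    fmt.setChannelCount(2);
    fmt.setSampleSize(32);
    fmt.setCodec("audio/pcm");
    fmt.setSampleType(QAudioFormat::Float);  // note: not every backend honors this

    QAudioDecoder decoder;                   // step 1: mp3 -> raw PCM
    decoder.setAudioFormat(fmt);
    decoder.setSourceFilename("song.mp3");   // placeholder input

    soundtouch::SoundTouch touch;            // step 2: tempo change, pitch kept
    touch.setSampleRate(44100);
    touch.setChannels(2);
    touch.setTempo(1.25);                    // 25% faster

    QAudioOutput output(fmt);                // step 3: push-mode playback
    QIODevice *sink = output.start();

    QObject::connect(&decoder, &QAudioDecoder::bufferReady, [&]() {
        QAudioBuffer buf = decoder.read();
        touch.putSamples(buf.constData<float>(), buf.frameCount());
        float out[4096];
        uint n;
        while ((n = touch.receiveSamples(out, 2048)) > 0)
            sink->write(reinterpret_cast<const char *>(out),
                        n * 2 * sizeof(float));   // n frames x 2 channels
    });

    decoder.start();
    return app.exec();
}

For smooth rate changes during playback, you would call setTempo() from the UI; SoundTouch applies the new value to subsequent samples without creating a gap.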

Cocos2d-iphone is unable to play my .caf files upon conversion

I am having the same problem as this question: SimpleAudioEngine, playing .caf files (which is closed)
The solution, however, does not work.
I have Battle.wav, which works just fine with
[[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"Battle.wav"];
Now, I want to convert it to .caf. According to the answer in that question, I should use this terminal line:
afconvert -f caff -d LEI16@22050 Battle.wav
The resulting file, however, does not work. When I use:
[[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"Battle.caf"];
Xcode displays the following message when I do that:
AudioStreamBasicDescription: 2 ch, 44100 Hz, 'lpcm' (0x00000C2C)
8.24-bit little-endian signed integer, deinterleaved
I don't really know what that means. What I do know, however, is that the file indeed produces no sound.
The question playing a .caf file: works fine in simulator but not in iPhone doesn't help either (and the problem there doesn't seem to be the same basic thing I'm having here).
Cocos2d-iphone 1.0.1, iPhone 4.
Try opening and exporting the file as AIFF (.caf) using Audacity (free). That should work if it is a file format problem.
One thing that may be an issue is that the caf file uses 2 channels; it may be as simple as SimpleAudioEngine not supporting stereo sound effects. If you want background music, use .mp3, since it can be decoded in hardware and the files are generally a lot smaller.

Get native video resolution of a video file

I'm currently writing a custom EVR for a Media Foundation player.
So far everything works, but I need to find the native resolution of the video file I'm rendering.
I tried to use the IBasicVideo2 interface (GetVideoSize, get_VideoHeight, get_SourceWidth, etc.), but it always returns E_NOINTERFACE...
So does someone have an easy way of getting the resolution of a video file? Even if it's with a nice light library... just the size, nothing else. Windows manages to find it in the file browser, but I'm totally unable to get it from code...
Thanks!
You can use IMediaDet in DirectShow to get information on the streams in a media file including the resolution of video streams.
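A rough sketch of the IMediaDet route, assuming the DirectShow Editing Services header (qedit.h) is available and COM has already been initialized; error handling is trimmed for brevity:

#include <windows.h>
#include <dshow.h>
#include <qedit.h>    // IMediaDet; removed from newer SDKs, may need an older one
#include <cstdlib>
#pragma comment(lib, "strmiids.lib")

HRESULT GetNativeVideoSize(const wchar_t *path, long *width, long *height) {
    IMediaDet *det = nullptr;   // assumes CoInitialize(Ex) was already called
    HRESULT hr = CoCreateInstance(CLSID_MediaDet, nullptr, CLSCTX_INPROC_SERVER,
                                  IID_IMediaDet, reinterpret_cast<void **>(&det));
    if (FAILED(hr)) return hr;

    BSTR file = SysAllocString(path);
    hr = det->put_Filename(file);
    SysFreeString(file);

    long streams = 0;
    if (SUCCEEDED(hr)) hr = det->get_OutputStreams(&streams);

    // Walk the streams until we find a video one and read its format block.
    for (long i = 0; SUCCEEDED(hr) && i < streams; ++i) {
        det->put_CurrentStream(i);
        AM_MEDIA_TYPE mt = {};
        if (FAILED(det->get_StreamMediaType(&mt))) continue;
        if (mt.majortype == MEDIATYPE_Video &&
            mt.formattype == FORMAT_VideoInfo && mt.pbFormat) {
            VIDEOINFOHEADER *vih = reinterpret_cast<VIDEOINFOHEADER *>(mt.pbFormat);
            *width  = vih->bmiHeader.biWidth;
            *height = labs(vih->bmiHeader.biHeight);  // biHeight < 0 = top-down
            CoTaskMemFree(mt.pbFormat);
            det->Release();
            return S_OK;
        }
        if (mt.pbFormat) CoTaskMemFree(mt.pbFormat);
    }
    det->Release();
    return E_FAIL;
}

Since you're already inside Media Foundation, another option that avoids the deprecated API is to read the source's media type and query the MF_MT_FRAME_SIZE attribute (e.g. via MFGetAttributeSize) for the width and height.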
There are some caveats though, so you might want a backup method.
You need suitable DirectShow filters registered that understand the media file being examined. It's possible that you have a filter installed that gives wrong results - e.g. an audio-only filter registered for a media type such that it ignores any video streams in the file.
It's currently deprecated with no indication on the MSDN reference page of what is replacing this functionality. It can also be a pain to build as the headers have been removed from the Windows SDK.
Here's one case in point where that method doesn't work: Get MP4 stream lengths