Simple sound playback in FMOD - C++

I am starting out with the FMOD API and I have a problem with sound playback. I followed the tutorial from this site: http://glasnost.itcarlow.ie/~powerk/audio/AddFMODtoaproject.html and all I get is crackling sound.
This is the code which I am using in my OpenGL init function:
FMOD::System *system = NULL;
FMOD::Sound  *sound1 = NULL;

FMOD::System_Create(&system);                    // create the FMOD system object
system->init(32, FMOD_INIT_NORMAL, 0);           // 32 virtual channels
system->createSound("sound.wav", FMOD_HARDWARE, 0, &sound1);
sound1->setMode(FMOD_LOOP_OFF);
system->playSound(FMOD_CHANNEL_FREE, sound1, false, 0);
Does anyone have any idea what is wrong? Or maybe there is another way to do this.

First, make sure you check the return code of every function and ensure it is FMOD_OK.
Second, you need to call System::update regularly, once per frame, for FMOD housekeeping.
Regarding your issue though, what platform are you on?
Crackling generally means the hardware cannot keep up; to fix it you can increase the amount of buffering FMOD does. This is controlled via System::setDSPBufferSize; try increasing the numBuffers count. You can determine the current values with System::getDSPBufferSize. Also make sure you call System::setDSPBufferSize before System::init for the new values to take effect.
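
As a rough sketch of that ordering (FMOD Ex style API, matching the question's code; the +2 is an arbitrary starting point to experiment with, not a recommendation):

FMOD::System *system = NULL;
FMOD::System_Create(&system);

unsigned int bufferLength = 0;
int numBuffers = 0;
system->getDSPBufferSize(&bufferLength, &numBuffers);   // read the current values

// Raise the buffer count; must happen BEFORE System::init to take effect.
system->setDSPBufferSize(bufferLength, numBuffers + 2);

system->init(32, FMOD_INIT_NORMAL, 0);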

I don't know if you are calling FMOD::System::update(), but you need to call it at least once per frame.
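For example (a minimal sketch; renderFrame() is a hypothetical stand-in for whatever your existing OpenGL loop does):

while (running)
{
    system->update();   // per-frame FMOD housekeeping
    renderFrame();      // hypothetical placeholder for your drawing code
}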

Related

glfwSwapBuffers really slow (no vsync)

I made a basic OpenGL program and ran it, and I was only getting 2400 fps with dips to 700 fps in release mode. I was really confused, so I stripped everything out of the main loop until the code looked like this:
while (true)
{
    glfwSwapBuffers(window);
}
and now I'm only getting 3400-4000 fps (I switched to release mode).
For a bit of context, I've made a game in DirectX 11 that gets 8000 fps when nothing is drawing, and that's with input and game logic, not an empty loop.
I've tried compiling GLFW myself and using the precompiled binaries. I'm thinking that maybe I need to figure out how to build GLFW as part of my project so I can get more optimization.
I'm really confused; I want to do some heavy stuff in this game, but I'm already getting 2-4x less performance when nothing is going on.
Last second addition:
People have talked about glfwSwapBuffers having low performance in other threads, but in all those cases they were using vsync. (I'm using glfwSwapInterval(0).)
There might be multiple reasons impacting the performance of glfwSwapBuffers. Since it works asynchronously, performance might be reduced by synchronization such as v-sync or the monitor refresh rate (60 Hz?). Usually you want your engine to be in sync with those (even if they are a limiting factor). You might also want to try glfwSwapInterval(0).
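To see where the time actually goes, a minimal timing sketch (window and context creation elided; glfwGetTime() returns seconds as a double):

#include <cstdio>
#include <GLFW/glfw3.h>

// ... create the window and make its context current ...

glfwSwapInterval(0);                        // request no vsync on this context
while (!glfwWindowShouldClose(window))
{
    double start = glfwGetTime();
    glfwSwapBuffers(window);                // time just the swap call
    printf("swap: %.3f ms\n", (glfwGetTime() - start) * 1000.0);
    glfwPollEvents();
}

Note that the GPU driver's control panel can override glfwSwapInterval, so a swap that consistently measures near 16.7 ms usually means vsync is still being forced somewhere.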

DX9 CreateTexture/CreateSprite/Present Freeze

I use DX9 for a few small things as I've gotten used to it and it does its job pretty well, but lately I've been facing an issue that, no matter how much I look into it, I can't find a solution for.
As the title states, the whole program sometimes freezes and I'm forced to close it when calling CreateTexture, CreateSprite, and sometimes the device's Present, which I can only guess stems from the first two. The thing is, there seem to be certain spots where calling any of the above functions freezes without fail, but if I don't create a texture at all and simply draw a plain rectangle I'm 100% fine. I've tried going through all the memory pools for CreateTexture in case it was any of that, but it still happens. For reference, I call these functions like so:
m_pDirect3DDevice->CreateTexture(m_nImageX, m_nImageY, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &m_pNewTexture.m_pTexture, NULL);
D3DXCreateSprite(m_pDirect3DDevice, &m_pNewTexture.m_pSprite);
m_pDirect3DDevice->Present(NULL, NULL, NULL, NULL);
Any help or suggestions at all are greatly appreciated!
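For debugging, a minimal sketch of checking the HRESULTs these calls return (this drops into the existing code above; FAILED and D3DERR_DEVICELOST are standard D3D9, and the printf logging is illustrative only):

HRESULT hr = m_pDirect3DDevice->CreateTexture(m_nImageX, m_nImageY, 1, 0,
    D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &m_pNewTexture.m_pTexture, NULL);
if (FAILED(hr))
    printf("CreateTexture failed: 0x%08lX\n", hr);

hr = D3DXCreateSprite(m_pDirect3DDevice, &m_pNewTexture.m_pSprite);
if (FAILED(hr))
    printf("D3DXCreateSprite failed: 0x%08lX\n", hr);

hr = m_pDirect3DDevice->Present(NULL, NULL, NULL, NULL);
if (hr == D3DERR_DEVICELOST)
    printf("device lost - reset required before further rendering\n");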

SDL game loop is dropping frames because of SDL_GL_SwapWindow

I'm just trying to make an empty game loop that doesn't lag!
My loop does basically nothing, yet sometimes it lags enough to drop frames (I'm trying to run at 60fps)
I traced the problem to SDL_GL_SwapWindow. I made sure vsync is turned off.
Most of the time SDL_GL_SwapWindow(window); takes <1ms. But sometimes it can take long enough to drop frames. Is this normal? I can't believe my raw C++ empty game loop is sometimes dropping frames!
My code doesn't do anything interesting, I've tried tweaking it quite a bit, but I've seen no improvement. You can see it all here http://pastebin.com/GpLAH8SZ
P.S. I'm on a decent gaming desktop!
I think it is the OS, which may not schedule your process 100% of the time.
You can raise the process priority class (see the MSDN docs on priority classes), but there are still going to be intervals where Windows does not have the resources to keep your code running continuously.
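A minimal sketch of that suggestion (SetPriorityClass and GetCurrentProcess are the standard Win32 calls; whether it actually removes the hitches depends on the rest of the system load):

#include <windows.h>
#include <cstdio>

// Ask the scheduler to favour this process. HIGH_PRIORITY_CLASS is usually
// enough; REALTIME_PRIORITY_CLASS can starve the rest of the OS.
if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
    printf("SetPriorityClass failed: %lu\n", GetLastError());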

i2c-dev slowing down a program

I am running a simple C/C++ program on a Raspberry Pi 2 with Raspbian kernel version 4.1.6-v7+ in order to view the thermal images from my new FLIR Lepton camera. I also want to see the actual temperature of the object I am pointing it at, but as the temperature is expressed relative to the internal temperature of the camera, I need to call a function lepton_temperature(), which requires the i2c-dev module to be loaded. When I load it and call the function, the program slows down from around 9 fps to around two frames per minute. I didn't really modify anything in the provided code, so I don't understand why that is happening. Here's the function:
int lepton_temperature() {
    if (!_connected) {
        lepton_connect();
    }
    result = LEP_GetSysFpaTemperatureKelvin(&_port, &fpa_temp_kelvin);
    return fpa_temp_kelvin;
}
Without i2c-dev loaded the program works normally, but of course then I get a zero instead of the temperature value. Does anyone have an idea what is going on and how to solve it or make it faster?
It might sound obvious, but your question suggests to me that you overlooked it: use a separate thread for the lepton_temperature call.
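A minimal sketch of that idea (assuming the lepton_temperature() above is safe to call from a worker thread; the one-second polling interval is arbitrary):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<int>  g_fpaTempKelvin{0};   // last temperature read by the worker
std::atomic<bool> g_running{true};

void temperaturePoller() {
    while (g_running) {
        // The blocking I2C ioctl now happens off the frame loop.
        g_fpaTempKelvin = lepton_temperature();
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

// In main: std::thread poller(temperaturePoller);
// The frame loop just reads g_fpaTempKelvin and never touches I2C itself.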
Turns out Alex was right: since the I2C commands go through ioctl, which is synchronous, calling the function after each loaded frame was making the program too slow. I hadn't considered it because every pixel value of every frame is calculated according to that temperature, so I was sure that couldn't possibly be the case. Turns out I was wrong.
Thanks to everyone anyway, and sorry for posting a question without checking a pretty obvious solution first!

Why does my DirectInput8 stack overflow?

The overall program is too complex to display here. Basically, just pay attention to the green highlights in my recent git commit. I am very new to DirectInput, so I expect I've made several errors. I have very carefully studied the MSDN documentation, so I promise I'm not just throwing this out there and stamping FIX IT FOR ME on it. :)
Basically, I think I have narrowed the problem down to the code around Engine::getEvent (line 238+). I don't understand how these functions work, and I've messed with certain pieces to get different results. My goal is simply to read keyboard events directly and output the raw numbers to the screen (I will deal with the numbers' meaning later). The problem relates to KEYBOARD_BUFFER_SIZE. If I make it small, the program seems to run fine but outputs no events. If I make it big, it runs a bit better, but it starts to slow down and then freezes (the OpenGL window just shows a rotating color cube). How do I properly capture keyboard events?
I checked the return values on all the setup steps higher in the code. They all return DI_OK just fine.
Your code seems to be okay (according to this tutorial, which I have used in the past). The use of several stack-based arrays is questionable, but shouldn't be too much of an issue (unless you start having lots of concurrent getEvent calls running).
However, your best bet would be to stop using DirectInput and start using Windows Raw Input. It's best to make this switch early (i.e., now) rather than realise later that you really need something other than DI to get the results you want.
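For reference, registering for Raw Input keyboard data is short (a minimal sketch; hwnd is your existing window, and the WM_INPUT handling shown is the bare minimum):

#include <windows.h>

// Register the keyboard (usage page 0x01, usage 0x06) for WM_INPUT delivery.
RAWINPUTDEVICE rid;
rid.usUsagePage = 0x01;
rid.usUsage     = 0x06;
rid.dwFlags     = 0;        // deliver input only while the window has focus
rid.hwndTarget  = hwnd;
RegisterRawInputDevices(&rid, 1, sizeof(rid));

// Then, inside the window procedure's message switch:
case WM_INPUT: {
    RAWINPUT raw;
    UINT size = sizeof(raw);
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size,
                    sizeof(RAWINPUTHEADER));
    if (raw.header.dwType == RIM_TYPEKEYBOARD) {
        // raw.data.keyboard.VKey / .Flags hold the key and up/down state.
    }
    return 0;
}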