Fmod constants (FMOD_HARDWARE) not found - c++

I am using FMOD on Ubuntu to play sound in my game.
I have the following:
#include "SoundEngine.h"
bool SoundEngine::initSystem()
{
System_Create(&system); // create an instance of the FMOD system
system->init(32, FMOD_INIT_NORMAL, 0); // initialise FMOD with 32 channels
//load sounds
system->createSound("Media/NoMoreMagic.ogg", FMOD_HARDWARE, 0, &sound1);
sound1->setMode(FMOD_LOOP_OFF); /* drumloop.wav has embedded loop points which automatically makes looping turn on, */
/* so turn it off here. We could have also just put FMOD_LOOP_OFF in the above CreateSound call. */
system->playSound(FMOD_CHANNEL_FREE, sound1, false, 0);
return true;
}
And the SoundEngine.h is:
#ifndef SOUNDENGINE_H_
#define SOUNDENGINE_H_
#include "FMOD/inc/fmod.hpp" //fmod c++ header
#pragma comment( lib, "fmodex_vc.lib" ) // fmod library
using namespace FMOD;
class SoundEngine{
public:
bool initSystem(void);
private:
//FMod Stuff
System *system; //handle to FMOD engine
Sound *sound1, *sound2; //sound that will be loaded and played
};
#endif /* SOUNDENGINE_H_ */
The problem is that FMOD_HARDWARE and FMOD_CHANNEL_FREE are not found.
Does anyone know where they are?
EDIT: Another issue is that the #pragma comment line produces a warning saying the pragma is ignored. Could the problem be related to the pragma? How can I fix it?

I believe FMOD_HARDWARE is from the older FMOD Ex API. Are you using FMOD 5? In my calls to System::createSound I use FMOD_DEFAULT or FMOD_CREATESAMPLE (the latter loads the entire sound and decompresses it in memory, to speed up playback), which seems to use the hardware well. For your playSound call, try
FMOD::Channel *channel;
system->playSound(sound1, NULL, false, &channel);
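Putting that together, here is a minimal sketch of initSystem against the FMOD 5 low-level API, assuming the same member names as in your header (untested, and checking of the FMOD_RESULT return values is omitted for brevity):

#include "SoundEngine.h"

bool SoundEngine::initSystem()
{
    FMOD::System_Create(&system);            // create the FMOD system object
    system->init(32, FMOD_INIT_NORMAL, 0);   // 32 virtual channels

    // FMOD_HARDWARE is gone in FMOD 5; FMOD_DEFAULT (or FMOD_CREATESAMPLE)
    // replaces it. FMOD_LOOP_OFF can be combined into the same flags word.
    system->createSound("Media/NoMoreMagic.ogg",
                        FMOD_DEFAULT | FMOD_LOOP_OFF, 0, &sound1);

    // FMOD_CHANNEL_FREE is also gone: playSound now hands you the channel.
    FMOD::Channel *channel = 0;
    system->playSound(sound1, 0, false, &channel);
    return true;
}

As for the EDIT: #pragma comment(lib, ...) is MSVC-specific, and GCC ignores it (hence the warning). On Ubuntu you link the FMOD library via linker flags instead, e.g. -lfmodex or -lfmod (depending on your FMOD version) plus the appropriate -L path.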

Related

Splitting ESP32 sketch into multiple threads and files

I'm working on a fairly complex and large sketch for my ESP32, and I'm dividing it into threads and classes, splitting everything into different files. For the sake of simplicity I'm just gonna show you the idea of my project setup.
For instance, I'm using a BME280 sensor to read temperature, humidity, and pressure values. Therefore, I created a header file called bme280.h and an associated cpp file called bme280.cpp. Here's the content of the two files.
bme280.h
#ifndef BME280_H
#define BME280_H
#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BME280.h>
#define SEALEVELPRESSURE_HPA (1013.25)
typedef struct {
float temperature;
float pressure;
float humidity;
}bmeData;
class BME280_sensor {
public:
BME280_sensor();
bmeData readBmeData();
private:
Adafruit_BME280 bme;
};
#endif
bme280.cpp
#include "bme280.h"
BME280_sensor::BME280_sensor() {
bool status;
status = bme.begin(0x76);
if (!status) {
Serial.println("Could not find a valid BME280 sensor, check wiring!");
while (1);
}
Serial.println("BME280 sensor correctly initialized!");
}
bmeData BME280_sensor::readBmeData() {
bmeData bmeValues;
bmeValues.temperature = bme.readTemperature();
bmeValues.pressure = bme.readPressure() / 100.0F;
bmeValues.humidity = bme.readHumidity();
return bmeValues;
}
This is basically how I'm using every sensor.
Now, my .ino file is busy doing some other job, so I used the pthread library to create a separate thread in charge of reading sensor values. Hence, my .ino file, before doing its job, starts this thread, which I named mainThread. Here's an example:
file.ino
#include <pthread.h>
#include "main.h"
void setup() {
delay(1000);
Serial.begin(115200);
Serial.println("Serial initialized");
pthread_t mainThreadRef;
int mainValue;
mainValue = pthread_create(&mainThreadRef, NULL, mainThread, (void*)NULL);
if (mainValue) {
Serial.println("Main thread error occured");
}
}
void loop() {
// Some other job
}
The main thread, instead, is implemented using a main.h file and a main.cpp file. Here's an example:
main.h
#ifndef MAIN_H
#define MAIN_H
#include <Arduino.h>
void *mainThread(void *arg); // signature must match what pthread_create expects
#endif
main.cpp
#include "main.h"
#include "bme280.h"
void *mainThread(void *arg) {
BME280_sensor bme;
while (1) {
bmeData bmeValues = bme.readBmeData();
Serial.println(bmeValues.temperature);
Serial.println(bmeValues.humidity);
Serial.println(bmeValues.pressure);
delay(3000);
}
}
Now, I wonder whether this whole project structure is sound, because I'm seeing weird readings, like temperatures over 100 or pressures below 0, and some other odd behaviour. To be more precise:
Is it "safe" to have a thread acting as the main thread doing all the jobs?
Is it good to have a different class for each sensor that I am using, or does it interfere with sensor readings?
Thank you all in advance for your help!
Is it "safe" to have a thread acting as the main thread doing all the jobs?
Yes. The thing where setup() and loop() functions get executed is also a thread. It's probably the first thread in the system, but otherwise there's no difference between it and the threads that you yourself create.
The hard part is not running an isolated process in its own thread - that's usually easy, and often a good idea. The hard part is getting data across different threads. I recommend Mastering the FreeRTOS Real Time Kernel for reading on how FreeRTOS threads (that's what the ESP32 is really using, pthreads is just a wrapper around it) work and how to communicate between them.
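As an illustration, here is a minimal sketch of one such mechanism, a FreeRTOS queue carrying readings from the sensor thread to loop(). It is untested and assumes the bmeData struct and BME280_sensor class from the question; the queue name is made up for the example:

#include <Arduino.h>
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"
#include "bme280.h"

static QueueHandle_t bmeQueue;   // length-1 "mailbox" queue

void *mainThread(void *arg) {
    BME280_sensor bme;           // only this thread touches the sensor
    for (;;) {
        bmeData values = bme.readBmeData();
        xQueueOverwrite(bmeQueue, &values);  // never blocks; replaces stale data
        delay(3000);
    }
}

void setup() {
    Serial.begin(115200);
    bmeQueue = xQueueCreate(1, sizeof(bmeData));  // length must be 1 for xQueueOverwrite
    // ... start mainThread with pthread_create, as in the question ...
}

void loop() {
    bmeData values;
    // Wait up to 100 ms for a fresh reading, then print it.
    if (xQueueReceive(bmeQueue, &values, pdMS_TO_TICKS(100)) == pdTRUE) {
        Serial.println(values.temperature);
    }
}

The point is that the data crosses the thread boundary through a FreeRTOS primitive rather than a shared global, so the reader can never observe a half-updated struct.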
Is it good to have a different class for each sensor that I am using or does it interfere with sensor readings?
If you have one class per type of sensor which wraps the mundane details of how to talk to that sensor, then this is generally considered good design (encapsulation etc). But it depends, really. The devil is in the details. Note that the Arduino or Adafruit sensor libraries already do that anyway - they tend to provide a nice, simple interface that you can use without knowing the details of how it works. Don't bother wrapping those (unless you have a clear purpose).

which header file contains ThrowIfFailed() in DirectX 12

(The question included the code as two screenshots.) I am a beginner to DirectX 12 (and game programming in general) and am studying from the Microsoft documentation. When using the function ThrowIfFailed() I get an error from IntelliSense in the VS2015 editor:
This declaration has no storage class or type specifier.
Can anyone help?
As you are new to DirectX programming, I strongly recommend starting with DirectX 11 rather than DirectX 12. DirectX 12 assumes you are already an expert DirectX 11 developer, and is quite an unforgiving API. It's absolutely worth learning if you plan to be a graphics developer, but starting with DX 12 over DX 11 is a huge undertaking. See the DirectX Tool Kit tutorials for DX11 and/or DX12.
For modern DirectX sample code and in the VS DirectX templates, Microsoft uses a standard helper function ThrowIfFailed. It's not part of the OS or system headers; it's just defined in the local project's Precompiled Header File (pch.h):
#include <exception>
namespace DX
{
inline void ThrowIfFailed(HRESULT hr)
{
if (FAILED(hr))
{
// Set a breakpoint on this line to catch DirectX API errors
throw std::exception();
}
}
}
For COM programming, you must check at runtime all HRESULT values for failure. If it's safe to ignore the return value of a particular DirectX 11 or DirectX 12 API, it will return void instead. You generally use ThrowIfFailed for 'fast fail' scenarios (i.e. your program can't recover if the function fails).
Note the recommendation is to use C++ exception handling (a.k.a. /EHsc), which is the default compiler setting in the VS templates. On the x64 and ARM platforms, this is implemented very efficiently without any additional code overhead. Legacy x86 requires some additional prologue/epilogue code that the compiler creates. Most of the "FUD" around exception handling in native code is based on the experience of using the older asynchronous structured exception handling (a.k.a. /EHa), which severely hampers the code optimizer.
See this wiki page for a bit more detail and usage information. You should also read the page on ComPtr.
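For reference, a typical call site looks something like this; the particular API call here is just a hypothetical example, assuming the DX::ThrowIfFailed helper above is in scope:

#include <d3d12.h>
#include <wrl/client.h>

inline void CreateDeviceExample()
{
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    DX::ThrowIfFailed(D3D12CreateDevice(
        nullptr,                     // default adapter
        D3D_FEATURE_LEVEL_11_0,      // minimum feature level
        IID_PPV_ARGS(&device)));     // throws on any failed HRESULT
}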
In my version of the Direct3D Game VS Templates on GitHub, I use a slightly enhanced version of ThrowIfFailed which you could also use:
#include <exception>
namespace DX
{
// Helper class for COM exceptions
class com_exception : public std::exception
{
public:
com_exception(HRESULT hr) : result(hr) {}
virtual const char* what() const override
{
static char s_str[64] = {};
sprintf_s(s_str, "Failure with HRESULT of %08X",
static_cast<unsigned int>(result));
return s_str;
}
private:
HRESULT result;
};
// Helper utility converts D3D API failures into exceptions.
inline void ThrowIfFailed(HRESULT hr)
{
if (FAILED(hr))
{
throw com_exception(hr);
}
}
}
This error is because some of your code is outside of any function.
Your error is right here:
void D3D12HelloTriangle::LoadPipeline() {
#if defined(_DEBUG) { //<= this brace is simply ignored because it is on a preprocessor line
ComPtr<ID3D12Debug> debugController;
if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController)))) {
debugController->EnableDebugLayer();
}
} // So, this one closes method LoadPipeline
#endif
// From here, you are out of any function
ComPtr<IDXGIFactory4> factory;
ThrowIfFailed(CreateDXGIFactory1(IID_PPV_ARGS(&factory)));
So to correct it :
void D3D12HelloTriangle::LoadPipeline() {
#if defined(_DEBUG)
{ //<= just put this bracket on its own line
ComPtr<ID3D12Debug> debugController;
if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController)))) {
debugController->EnableDebugLayer();
}
} // Now this one closes the scope opened above, not the method
#endif
// From here on, you are back inside LoadPipeline, as intended
ComPtr<IDXGIFactory4> factory;
ThrowIfFailed(CreateDXGIFactory1(IID_PPV_ARGS(&factory)));

Kernel module periodically calling user space program

I want to call a user-space program from a kernel module periodically, but the module freezes the system when I try to load it.
The following is the program:
#include <linux/module.h> /* Needed by all modules */
#include <linux/kernel.h> /* Needed for KERN_INFO */
#include <linux/init.h> /* Needed for the macros */
#include <linux/jiffies.h>
#include <linux/time.h>
#include <linux/proc_fs.h>
#include <asm/uaccess.h>
#include <linux/hrtimer.h>
#include <linux/sched.h>
#include <linux/delay.h>
#define TIME_PERIOD 50000
static struct hrtimer hr_timer;
static ktime_t ktime_period_ns;
static enum hrtimer_restart timer_callback(struct hrtimer *timer){
char userprog[] = "test.sh";
char *argv[] = {userprog, "2", NULL };
char *envp[] = {"HOME=/", "PATH=/sbin:/usr/sbin:/bin:/usr/bin", NULL };
printk("\n Timer is running");
hrtimer_forward_now(&hr_timer, ktime_period_ns);
printk("callmodule: %s\n", userprog);
call_usermodehelper(userprog, argv, envp, UMH_WAIT_PROC);
return HRTIMER_RESTART;
}
static int __init timer_init(void) {
ktime_period_ns= ktime_set( 0, TIME_PERIOD);
hrtimer_init ( &hr_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL );
hr_timer.function = timer_callback;
hrtimer_start( &hr_timer, ktime_period_ns, HRTIMER_MODE_REL );
return 0;
}
static void __exit timer_exit(void) {
int cancelled = hrtimer_cancel(&hr_timer);
if (cancelled)
printk(KERN_ERR "Timer is still running\n");
else
printk(KERN_ERR "Timer is cancelled\n");
}
module_init(timer_init);
module_exit(timer_exit);
MODULE_LICENSE("GPL");
test.sh is a script which just echoes a message.
I have tested the call_usermodehelper part and the timer part individually, and each works fine. But when I combine the two, the system hangs.
Can anybody please help me solve this problem?
A quick look around strongly suggests that hrtimer callbacks are executed from irq context, which is expected - how else are you going to get high resolution?
But this also means you must not block, while your callback can block due to call_usermodehelper, regardless of whether UMH_NO_WAIT is passed.
So it seems you are testing your module on a kernel with debugging disabled, which is fundamentally wrong.
But this is less relevant, as the thing you are trying to achieve looks fundamentally wrong in the first place.
I can only recommend you elaborate on what the actual problem is. There is absolutely no way that forking + execing has anything to do with something requiring a high-resolution timer.
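For what it's worth, the usual pattern for this situation is to defer the blocking work to process context. A rough sketch (untested) against the same APIs the question's module already uses:

#include <linux/workqueue.h>

static struct work_struct umh_work;

static void umh_work_fn(struct work_struct *work)
{
    char *argv[] = { "/path/to/test.sh", "2", NULL }; /* use an absolute path */
    char *envp[] = { "HOME=/", "PATH=/sbin:/usr/sbin:/bin:/usr/bin", NULL };

    /* Workqueue handlers run in process context, so sleeping is allowed here. */
    call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
}

static enum hrtimer_restart timer_callback(struct hrtimer *timer)
{
    schedule_work(&umh_work); /* just queues the work; never blocks in irq context */
    hrtimer_forward_now(timer, ktime_period_ns);
    return HRTIMER_RESTART;
}

/* In timer_init(), before hrtimer_start(): INIT_WORK(&umh_work, umh_work_fn); */

Even then, note that TIME_PERIOD of 50000 ns is a 50 microsecond period - far too fast for spawning a process each tick, so you would also want a much longer period.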

Switch off Memory Leak Detection in boost.Test

I'm currently using boost.Test and I'm wondering whether it's possible to switch off the memory leak detection when compiling in DEBUG mode.
I don't want to use the command line switch --detect_memory_leak=0. I'm looking for a kind of #define parameter that switches off the memory leak detection feature in DEBUG mode.
It would be also suitable for me to switch off the memory detection feature by defining a certain compiler switch. I'm currently using Microsoft Visual Studio 2010.
#define BOOST_TEST_DETECT_MEMORY_LEAK 0 // Preprocessor switch I'm looking for!
#define BOOST_TEST_MODULE MyUnitTest
#include <boost/test/included/unit_test.hpp>
BOOST_AUTO_TEST_SUITE(MySuite);
BOOST_AUTO_TEST_CASE(MyUnitTest) {
/// Following code has a memory leak
/// ....
}
BOOST_AUTO_TEST_SUITE_END()
Just found out that possibly the best way to turn off the detection of memory leaks is to include the following code snippet in one's tests.
#include <boost/test/debug.hpp>
struct GlobalFixture {
GlobalFixture() {
boost::debug::detect_memory_leaks(false);
}
~GlobalFixture() { }
};
BOOST_GLOBAL_FIXTURE(GlobalFixture);
Still, I was not able to switch off and to switch on the detection of memory leaks for single tests.
You can directly set the environment variable BOOST_TEST_DETECT_MEMORY_LEAK to 0, or use putenv:
#include <cstdlib>
//...
BOOST_AUTO_TEST_CASE(MyUnitTest) {
putenv("BOOST_TEST_DETECT_MEMORY_LEAK=0");
//...
}
Edit
As you're using Visual Studio 2010, you can try _putenv or _wputenv:
#include <stdlib.h>
//...
BOOST_AUTO_TEST_CASE(MyUnitTest) {
_putenv("BOOST_TEST_DETECT_MEMORY_LEAK=0");
//...
}
Otherwise, I found a function detect_memory_leaks in the Boost documentation, but it seems to be available only in recent Boost versions.
_CrtSetDbgFlag(0);
is the only thing that mostly worked for me. Some leak messages still came out, but not enough to make me wait.
Here's some code detailing everything I tried:
struct GlobalFixture
{
GlobalFixture()
{
// This doesn't seem to do anything
// boost::debug::detect_memory_leaks(false);
// This either
//_putenv("BOOST_TEST_DETECT_MEMORY_LEAK=0");
// This total hack also does nothing
// using namespace boost::unit_test::runtime_config;
// const_cast<boost::runtime::arguments_store&>(argument_store()).set(btrt_detect_mem_leaks, 0);
// This gets rid of most of the messages
_CrtSetDbgFlag(0);
}
};
Why not use the _DEBUG macro?
#ifdef _DEBUG
#define BOOST_TEST_DETECT_MEMORY_LEAK 0
#endif
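Combining the _DEBUG idea with the global-fixture approach above gives a sketch like the following (untested; as noted in the thread, whether boost::debug::detect_memory_leaks takes effect may depend on your Boost version):

#include <boost/test/debug.hpp>

struct GlobalFixture {
    GlobalFixture() {
#ifdef _DEBUG
        // Only debug builds of the MSVC runtime do leak checking anyway,
        // so restrict the switch-off to _DEBUG.
        boost::debug::detect_memory_leaks(false);
#endif
    }
};
BOOST_GLOBAL_FIXTURE(GlobalFixture);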

ReadFile Win32 API

I want to read a file, but when I run my program under the debugger, a popup appears saying "System Programming has stopped working", and the console says "Press enter to close the program." My code is:
// System Programming.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "iostream"
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
HANDLE hin;
HANDLE hout;
TCHAR buff[20]= {'q','2','3'};
TCHAR buff2[20]={'a','v'};
hin = CreateFile(_T("Abid.txt"),GENERIC_WRITE,0,NULL,OPEN_EXISTING,FILE_ATTRIBUTE_NORMAL,0);
if(hin == INVALID_HANDLE_VALUE)
{
cout<<"error";
}
WriteFile(hin,buff,40,0,NULL);
CloseHandle(hin);
hout = CreateFile(_T("Abid.txt"),GENERIC_READ,0,NULL,OPEN_EXISTING,FILE_ATTRIBUTE_NORMAL,0);
if(hout == INVALID_HANDLE_VALUE)
{
cout<<"error";
}
ReadFile(hout,buff2,40,0,NULL);
CloseHandle(hout);
return 0;
}
According to MSDN, the lpNumberOfBytesWritten parameter can be NULL only when the lpOverlapped parameter is not NULL. So the calls should be
DWORD nWritten;
WriteFile(hin, buff, 40, &nWritten, NULL);
and
DWORD nRead;
ReadFile(hout, buff2, 40, &nRead, NULL);
Also, rename hin and hout: the handle named hin is the one you write with, and hout is the one you read with, so the names are backwards.
Others have already answered your question. This is about the code.
// Your code:
// System Programming.cpp : Defines the entry point for the console application.
//
Just remove that comment. It isn't true. :-) The entry point for your program is where the machine code starts executing, and with the Microsoft toolchain it's specified by the /entry linker option.
Note that Microsoft's documentation is generally confused about entry points, e.g. it has always, one way or another, documented an incorrect signature for the entry point.
It's one of the most infamous Microsoft documentation errors, and, given that it has persisted, in various forms, for 15 years, I think it says something (not sure exactly what, though).
// Your code:
#include "stdafx.h"
You don't need this automatically generated header. Instead use <windows.h>. A minimal way to include <windows.h> for your program would be
#undef UNICODE
#define UNICODE
#include <windows.h>
For C++ in general you'll also want to make sure that STRICT and NOMINMAX are defined before including <windows.h>. With modern tools at least STRICT is defined by default, but it doesn't hurt to make sure. Without it some declarations won't compile with a C++ compiler, at least not without reinterpret casts, e.g. dialog procedures.
// Your code:
#include "iostream"
using namespace std;
Almost OK.
Do this:
#include <iostream>
using namespace std;
The difference is where the compiler searches for headers. With quoted name it searches in some additional places first (and that's all that the standard has to say about it). With most compilers those additional places include the directory of the including file.
// Your code:
int _tmain(int argc, _TCHAR* argv[])
Oh no! Don't do this. It's a Microsoft "feature" that helps support Windows 9.x. And it's only relevant when you're using MFC linked dynamically and you're targeting Windows 9.x; without MFC in the picture you'd just use the Microsoft Unicode layer.
Are you really targeting Windows 9.x with an app using dynamically linked MFC?
Instead, do ...
int main()
... which is standard, or use the Microsoft language extension ...
int wmain( int argc, wchar_t* argv[] )
... if you want to handle command line arguments the "easy" way.
// Your code:
{
HANDLE hin;
HANDLE hout;
TCHAR buff[20]= {'q','2','3'};
TCHAR buff2[20]={'a','v'};
The TCHAR stuff is just more of that MFC in Windows 9.x support stuff.
Apart from being totally unnecessary (presumably, you're not really targeting Windows 9.x, are you?), it hides your intention and hurts the eyes.
Did you mean ...
char buff[20] = {'q', '2', '3'};
... perhaps?
// Your code:
hin = CreateFile(_T("Abid.txt"),GENERIC_WRITE,0,NULL,OPEN_EXISTING,FILE_ATTRIBUTE_NORMAL,0);
if(hin == INVALID_HANDLE_VALUE)
{
cout<<"error";
}
As others have mentioned, OPEN_EXISTING isn't logical when you're creating the file, and the count pointer argument can't be 0 for your usage.
When using <windows.h>, with UNICODE defined as it should be, the filename argument should be specified as L"Abid.txt".
Cheers & hth.,
The problem is that you're passing a NULL pointer in for the lpNumberOfBytesWritten/lpNumberOfBytesRead parameter. While this is an optional parameter, there's a condition:
This parameter can be NULL only when the lpOverlapped parameter is not NULL
Also, you may have the size of your buffers wrong:
WriteFile(hin,buff,40,0,NULL); // says that buff has 40 bytes
ReadFile(hout,buff2,40,0,NULL); // says that buff2 has 40 bytes
But if you're compiling for ANSI instead of UNICODE, these will only be 20 bytes in size.
You should probably use sizeof(buff) and sizeof(buff2) instead.
Assuming your initial code attempts to create the file as a new file, you cannot use OPEN_EXISTING; you have to use OPEN_ALWAYS (or some other creation variant) on this call.
The OPEN_EXISTING usage for the read-back will be OK.
By the way, once this is fixed, the WriteFile call causes an access violation, as you are trying to write more bytes than your array contains.
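Pulling those fixes together, a minimal corrected sketch might look like this (untested): byte counts taken with sizeof, real out-parameters instead of NULL, the first open using CREATE_ALWAYS, and the handle names un-swapped.

#include <windows.h>
#include <iostream>

int main()
{
    char buff[20]  = { 'q', '2', '3' };
    char buff2[20] = {};

    // Write: create (or truncate) the file.
    HANDLE hout = CreateFileA("Abid.txt", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
    if (hout == INVALID_HANDLE_VALUE) { std::cout << "error"; return 1; }

    DWORD nWritten = 0;
    WriteFile(hout, buff, sizeof(buff), &nWritten, NULL);
    CloseHandle(hout);

    // Read it back.
    HANDLE hin = CreateFileA("Abid.txt", GENERIC_READ, 0, NULL,
                             OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
    if (hin == INVALID_HANDLE_VALUE) { std::cout << "error"; return 1; }

    DWORD nRead = 0;
    ReadFile(hin, buff2, sizeof(buff2), &nRead, NULL);
    CloseHandle(hin);
    return 0;
}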