Assertion failed error while searching for Active Sound - c++

I want to get PlaybackTime from the FActiveSound class. However, I always get this error:
Assertion failed: IsInAudioThread()
when i call this:
FActiveSound* ActiveSound = AudioDevice->FindActiveSound(AudioComponent->GetAudioComponentID());
My question is: why does that happen? It may be a noob question, but I cannot figure it out on my own.
EDIT 1:
What I'm doing at the moment is this:
Binding the delegate: AudioComponent->OnAudioPlaybackPercent.AddDynamic(this, &UAudioController::GimmePlaybackPosition);
then there is my function which looks like this:
void UAudioController::GimmePlaybackPosition(const USoundWave* SoundWave, float PlaybackPercent)
{
    FAudioDevice* AudioDevice = GEngine->GetAudioDevice();
    if (AudioDevice)
    {
        FActiveSound* ActiveSound;
    }
    //GEngine->AddOnScreenDebugMessage(-1, 15.0f, FColor::Red, TEXT("Delegate fire"));
}
And now I'm wondering how I can get at my AudioComponent from there so that I can read PlaybackTime.

You have to run the code that accesses the AudioDevice on the audio thread, via FAudioThread::RunCommandOnAudioThread.
Here is a snippet used in AudioComponent.cpp:
if (FAudioDevice* AudioDevice = GetAudioDevice())
{
    const uint64 MyAudioComponentID = AudioComponentID;
    FAudioThread::RunCommandOnAudioThread([AudioDevice, MyAudioComponentID, InName, InFloat]()
    {
        FActiveSound* ActiveSound = AudioDevice->FindActiveSound(MyAudioComponentID);
        if (ActiveSound)
        {
            ActiveSound->SetFloatParameter(InName, InFloat);
        }
    }, GET_STATID(STAT_AudioSetFloatParameter));
}
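Applied to the delegate handler from the question, a minimal sketch (untested) could look like the code below. It assumes AudioComponent is a member of UAudioController, that GEngine->GetAudioDevice() returns a valid FAudioDevice* as in the question, and that FActiveSound exposes a public PlaybackTime member; the stat-id argument of RunCommandOnAudioThread is omitted since it has a default value.
void UAudioController::GimmePlaybackPosition(const USoundWave* SoundWave, float PlaybackPercent)
{
    if (FAudioDevice* AudioDevice = GEngine->GetAudioDevice())
    {
        // Capture only the component ID on the game thread.
        const uint64 MyAudioComponentID = AudioComponent->GetAudioComponentID();
        FAudioThread::RunCommandOnAudioThread([AudioDevice, MyAudioComponentID]()
        {
            // This lambda runs on the audio thread, so FindActiveSound no longer trips IsInAudioThread().
            if (FActiveSound* ActiveSound = AudioDevice->FindActiveSound(MyAudioComponentID))
            {
                const float PlaybackTime = ActiveSound->PlaybackTime; // assumed member, see ActiveSound.h
                // Marshal PlaybackTime back to the game thread (e.g. via a queued game-thread task) if it is needed there.
            }
        });
    }
}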

The error usually means that you tried to access the AudioDevice from the wrong thread. Perhaps you should post your code to get better answers.

Related

How to solve an access violation error when using a method

I am getting a weird access violation error when using the getAt() method.
I call the method like this:
OdDbBlockTablePtr w_kOdBlockTablePtr;
bool lbCreateDefaults = false;
OdDb::MeasurementValue lkMeasurement = OdDb::kEnglish;
OdDbDatabasePtr pDb;
// Initialize the database
pDb = g_ExSystemServices.createDatabase(lbCreateDefaults, lkMeasurement);
// TABLE - hold the pointer
w_kOdBlockTablePtr = pDb->getBlockTableId().openObject(OdDb::kForWrite);
const wchar_t AcadBlockModelSpace[] = L"*MODEL_SPACE";
wstring lsModelSpace(AcadBlockModelSpace);
w_kOdModelSpaceBlockRecPtr = GetTableRecordIdFromName(lsModelSpace, (OdDbSymbolTablePtr&)w_kOdBlockTablePtr).safeOpenObject(OdDb::kForWrite);
OdDbObjectId K_TeighaClass::GetTableRecordIdFromName(wstring& psName, OdDbSymbolTablePtr& pkTablePtr)
{
    OdDbObjectId lkId;
    try
    {
        OdString lsOdName = psName.c_str();
        lkId = pkTablePtr->getAt(lsOdName);
    }
    catch (OdError& err)
    {
        DoOdError(err, NULL, NULL);
    }
    return lkId;
}
I would really appreciate it if someone could help me.
Thanks in advance.
That's not weird at all. If you hover your mouse over pkTablePtr, you will almost certainly find that it is nullptr (or the debugger might report this as 0).
There's not enough information in your question to say why this might be, but since you are already running under the debugger you can walk through your code and find out.
try ... catch won't catch a hard error like this, by the way. For that, you need __try ... __except (supported on Windows only).
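As a sketch of the defensive check this implies (untested, and assuming the ODA/Teigha smart pointer type exposes isNull(), as OdSmartPtr does), you can bail out before ever calling getAt():
OdDbObjectId K_TeighaClass::GetTableRecordIdFromName(wstring& psName, OdDbSymbolTablePtr& pkTablePtr)
{
    OdDbObjectId lkId;
    if (pkTablePtr.isNull())
    {
        // openObject() evidently failed upstream; return a null id instead of crashing inside getAt().
        return lkId;
    }
    try
    {
        OdString lsOdName = psName.c_str();
        lkId = pkTablePtr->getAt(lsOdName);
    }
    catch (OdError& err)
    {
        DoOdError(err, NULL, NULL);
    }
    return lkId;
}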

How to fix : "FActorSpawnParameters is not defined" error in C++ in UE4?

I'm following a tutorial to make a game with UE4 and C++, and an error appears when I type the following line:
FActorSpawnParameters params;
It says that the identifier FActorSpawnParameters is not defined.
I've tried to modify some of my code, but nothing changed...
So I'll paste everything in order.
void AUltimatePawn::Shoot()
{
    if (BulletClass)
    {
        FActorSpawnParameters params;
        params.SpawnCollisionHandlingOverride = ESpawnActorCollisionHandlingMethod::AlwaysSpawn;
        params.bNoFail = true;
        params.Owner = this;
        params.Instigator = this;

        FTransform BulletSpawnTransform;
        BulletSpawnTransform.SetLocation(GetActorForwardVector() * 500.f + GetActorLocation());
        BulletSpawnTransform.SetRotation(GetActorRotation().Quaternion());
        BulletSpawnTransform.SetScale3D(FVector(1.f));

        GetWorld()->SpawnActor<ABullet>(BulletClass, BulletSpawnTransform, params);
    }
}
I just want to know how to fix this error.
Thanks!
Make sure that you include Runtime/Engine/Classes/Engine/World.h.
I extracted this information from the official API reference.
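For reference, a minimal sketch of the top of the .cpp file (the pawn's own header name is assumed here); the long module path above corresponds to the short include form used in source files:
#include "UltimatePawn.h"   // assumed name of the pawn's own header
#include "Engine/World.h"   // declares FActorSpawnParameters and UWorld::SpawnActor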

C++ Legacy Driver: MongoDB Replica Set in a Class in a DLL

I have built a DLL that includes a class implementing the MongoDB replica set operations. Here is a summary of the class.
#include "mongo/client/dbclient.h"
mongoimp::mongoimp() {
mongo::client::initialize();
}
mongoimp::~mongoimp() {
mongo::client::shutdown();
}
int mongoimp::queryTRecords() {
string errmsg;
vector<mongo::HostAndPort> hosts = { mongo::HostAndPort("xx-a0.yyyy.com:xxxxx"), mongo::HostAndPort("xx-a1.yyyy.com:xxxxx") };
static mongo::DBClientReplicaSet con("xx", hosts, 0);
con.connect();
con.auth("dbname", "username", "password", errmsg);
auto_ptr<DBClientCursor> cursor = con.query("dbname.t", BSONObj());
BSONObj response;
con.logout("xx", response);
if (cursor->more()) {
BSONObj recordnm = cursor->nextSafe();
return(recordnm.getIntField("lastid"));
} else return(-1);
}
The above code works. But here are my questions:
1) With the above setup I can perform normal MongoDB operations from the DLL, but since my application needs to update MongoDB data constantly (close to real time, up to hundreds of updates a second), I get the error "No valid replicaset instance servers found" when updating data.
2) Only the server needs to talk to the MongoDB database, so basically I just need one connection to it. I therefore want to declare the mongo::DBClientReplicaSet con as a static global variable and connect it in the class constructor. But it seems I cannot do that: my application does not run at all, and I constantly get the following error.
Assertion failed: px != 0, file C:\Boost\include\boost-1_62\boost/smart_ptr/scoped_ptr.hpp, line 105
Does anybody know how to solve this?
Below is the code I tried:
static mongo::DBClientReplicaSet con("xx", { mongo::HostAndPort("xx-a0.yyyy.com:xxxxx"), mongo::HostAndPort("xx-a1.yyyy.com:xxxxx") }, 0);

mongoimp::mongoimp() {
    mongo::client::initialize();
    string errmsg;
    con.connect();
    con.auth("dbname", "username", "password", errmsg);
}

mongoimp::~mongoimp() {
    BSONObj response;
    con.logout("xx", response);
    mongo::client::shutdown();
}

int mongoimp::queryTRecords() {
    auto_ptr<DBClientCursor> cursor = con.query("dbname.t", BSONObj());
    if (cursor->more()) {
        BSONObj recordnm = cursor->nextSafe();
        return(recordnm.getIntField("lastid"));
    } else return(-1);
}
3) Last question: I noticed there is a mongo/client/dbclient_rs.h file for replica sets, but it seems I cannot use it. With it, I get errors for initialize() and the auto_ptr cursor. How can I use that file and take full advantage of the replica set features? How do I initialize the replica set if I can use dbclient_rs.h, and how do I query and fetch data in that case?
Thanks a lot in advance!
For question No. 2, I remembered the reason for the error:
You need to call mongo::client::initialize before you construct any driver objects, or BSON for that matter.
But I still need a solution for how to make that global definition possible.
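A minimal sketch (untested, using only the legacy-driver calls already shown in the question) of one way to keep a single shared connection without constructing it before mongo::client::initialize() is to hold it through a pointer and create it in the constructor:
#include <memory>
#include "mongo/client/dbclient.h"
// Not constructed at DLL load time, so nothing runs before mongo::client::initialize().
static std::unique_ptr<mongo::DBClientReplicaSet> con;
mongoimp::mongoimp() {
    mongo::client::initialize();   // must run before any driver or BSON object is created
    std::vector<mongo::HostAndPort> hosts = {
        mongo::HostAndPort("xx-a0.yyyy.com:xxxxx"),
        mongo::HostAndPort("xx-a1.yyyy.com:xxxxx") };
    con.reset(new mongo::DBClientReplicaSet("xx", hosts, 0));
    std::string errmsg;
    con->connect();
    con->auth("dbname", "username", "password", errmsg);
}
mongoimp::~mongoimp() {
    mongo::BSONObj response;
    con->logout("xx", response);
    con.reset();                   // destroy the connection before shutting the driver down
    mongo::client::shutdown();
}
int mongoimp::queryTRecords() {
    std::auto_ptr<mongo::DBClientCursor> cursor = con->query("dbname.t", mongo::BSONObj());
    if (cursor->more()) {
        mongo::BSONObj recordnm = cursor->nextSafe();
        return recordnm.getIntField("lastid");
    } else return -1;
}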

LLVM API: correct way to create/dispose

I'm attempting to implement a simple JIT compiler using the LLVM C API. So far I have had no problems generating IR code and executing it, that is, until I start disposing of objects and recreating them.
What I would like to do is clean up the JIT'ted resources the moment they're no longer used by the engine. What I'm attempting is something like this:
while (true)
{
    // Initialize module & builder
    InitializeCore(GetGlobalPassRegistry());
    module = ModuleCreateWithName(some_unique_name);
    builder = CreateBuilder();

    // Initialize target & execution engine
    InitializeNativeTarget();
    engine = CreateExecutionEngineForModule(...);
    passmgr = CreateFunctionPassManagerForModule(module);
    AddTargetData(GetExecutionEngineTargetData(engine), passmgr);
    InitializeFunctionPassManager(passmgr);

    // [... my fancy JIT code ...] --** Will give a serious error the second iteration

    // Destroy
    DisposePassManager(passmgr);
    DisposeExecutionEngine(engine);
    DisposeBuilder(builder);
    // DisposeModule(module); //--> Commented out: deleted by the execution engine
    Shutdown();
}
However, this doesn't seem to work correctly: on the second iteration of the loop I get a pretty bad error...
So to summarize: what's the correct way to destroy and re-create the LLVM API objects?
Posting this as an answer because the code is too long for a comment. If possible and there are no other constraints, try to use LLVM like this. I am pretty sure the Shutdown() inside the loop is the culprit here, and I don't think it would hurt to keep the builder outside the loop too. This reflects the way I use LLVM in my JIT.
InitializeCore(GetGlobalPassRegistry());
InitializeNativeTarget();
builder = CreateBuilder();
while (true)
{
    // Initialize module
    module = ModuleCreateWithName(some_unique_name);

    // Initialize target & execution engine
    engine = CreateExecutionEngineForModule(...);
    passmgr = CreateFunctionPassManagerForModule(module);
    AddTargetData(GetExecutionEngineTargetData(engine), passmgr);
    InitializeFunctionPassManager(passmgr);

    // [... my fancy JIT code ...]

    // Destroy
    DisposePassManager(passmgr);
    DisposeExecutionEngine(engine);
}
DisposeBuilder(builder);
Shutdown();
/* program init */
LLVMInitializeNativeTarget();
LLVMInitializeNativeAsmPrinter();
LLVMInitializeNativeAsmParser();
LLVMLinkInMCJIT();
ctx->context = LLVMContextCreate();
ctx->builder = LLVMCreateBuilderInContext(ctx->context);
LLVMParseBitcodeInContext2(ctx->context, module_template_buf, &ctx->module); /* create the module */

/* IR code creation */
{
    function = LLVMAddFunction(ctx->module, "my_func", ...);
    LLVMAppendBasicBlockInContext(ctx->context, ...);
    LLVMBuild...;
    ...
}

/* optional optimization */
{
    LLVMPassManagerBuilderRef pass_builder = LLVMPassManagerBuilderCreate();
    LLVMPassManagerBuilderSetOptLevel(pass_builder, 3);
    LLVMPassManagerBuilderSetSizeLevel(pass_builder, 0);
    LLVMPassManagerBuilderUseInlinerWithThreshold(pass_builder, 1000);
    LLVMPassManagerRef function_passes = LLVMCreateFunctionPassManagerForModule(ctx->module);
    LLVMPassManagerRef module_passes = LLVMCreatePassManager();
    LLVMPassManagerBuilderPopulateFunctionPassManager(pass_builder, function_passes);
    LLVMPassManagerBuilderPopulateModulePassManager(pass_builder, module_passes);
    LLVMPassManagerBuilderDispose(pass_builder);

    LLVMInitializeFunctionPassManager(function_passes);
    for (LLVMValueRef value = LLVMGetFirstFunction(ctx->module); value;
         value = LLVMGetNextFunction(value))
    {
        LLVMRunFunctionPassManager(function_passes, value);
    }
    LLVMFinalizeFunctionPassManager(function_passes);

    LLVMRunPassManager(module_passes, ctx->module);
    LLVMDisposePassManager(function_passes);
    LLVMDisposePassManager(module_passes);
}

/* optional, for debugging */
{
    LLVMVerifyModule(ctx->module, LLVMAbortProcessAction, &error);
    LLVMPrintModule...;
}

if (LLVMCreateJITCompilerForModule(&ctx->engine, ctx->module, 0, &error) != 0)
{
    /* handle the error */
}
my_func = (exec_func_t)(uintptr_t)LLVMGetFunctionAddress(ctx->engine, "my_func");
LLVMRemoveModule(ctx->engine, ctx->module, &ctx->module, &error);
LLVMDisposeModule(ctx->module);
LLVMDisposeBuilder(ctx->builder);

/* call the JIT'ted function as often as needed */
{
    my_func(...);
}

LLVMDisposeExecutionEngine(ctx->engine);
LLVMContextDispose(ctx->context);

/* program finit */
LLVMShutdown();

Odd ColdFusion Behavior--Abort Not Honored

Using ColdFusion 9.01, we have occasionally observed an issue where an error occurs within a CFC function and, when we attempt to add writeDump(foo); and abort; calls to debug the error, ColdFusion does not honor those calls.
Example:
private void function index(Event)
{
    var rc = Event.getCollection();
    var prc = Event.getCollection(private=true);

    /** NOT HONORED! **/
    writeDump(var=rc);
    abort;

    prc.JSON = {};
    prc.JSON.show = variables.APIProxy.call(
        handler = 'shows'
        ,action = 'read'
        ,event = arguments.Event
        /** THE ERROR IS OCCURRING HERE **/
        ,params = { language=lcase(rc.language.getLanguage_Medium()), show=rc.show_name }
    );
    prc.JSON.showEpisodes = variables.APIProxy.call(
        handler = 'episodes'
        ,action = 'index'
        ,event = arguments.Event
        ,params = { language=lcase(rc.language.getLanguage_Medium()), show=rc.show_name, detail=true }
    );
    prc.JSON.products = variables.APIProxy.call(
        handler = 'products'
        ,action = 'index'
        ,event = arguments.Event
        ,params = { language=lcase(rc.language.getLanguage_Medium()), detail=true }
    );
    Event.addAssets(
        'model/product.js
,model/show.js
,collection/product_mobile.js
,collection/show_mobile.js
,view/product_mobile.js
,view/productList.js
,view/show_mobile.js
,view/showList.js
,model/episode.js
,view/episode_mobile.js
,view/episodeList.js
,collection/episode_mobile.js
,collection/product_mobile.js
,mobile/episodeObject.css
,mobile/show.js
,mobile/show.css
,mobile/category.css
');
    Event.setLayout('layout.mobile');
    Event.setView("show/index_mobile");
    return;
}
I believe we have successfully eliminated caching. I am curious if anyone else has encountered this.
Thank you.
Aaron
I'm guessing that the error is a parse error, not a true runtime error, so it gets thrown before the function actually executes. It's not actually skipping over your abort, it just fails to parse (or execute) the entire thing.
I'm not sure why you're getting a parse error there, but I do know the CF code that handles struct literals is somewhat flaky.
The issue was with the struct literals declared inline within the arguments of the function calls.
I'm going to go out on a limb here and say that your issue might have something to do with this bug:
http://cfbugs.adobe.com/cfbugreport/flexbugui/cfbugtracker/main.html#bugId=86960
Is there anything in your app that executes in the onRequestEnd() method?
It would be helpful to tell us what exactly is happening and/or the output you're getting when the issue occurs.