Can't assert facts in CLIPS embedded application - C++

I'm trying to assert a new fact in CLIPS from an embedded application.
I tried two ways:
- The first uses assert, as in the example on page 74 of the Advanced Programming Guide.
- The second uses assert-string.
I tried each way alone and also the two together.
I'm using a RUN_TIME module. My code outputs the right constructs (defrules and deftemplates), but the new fact is not asserted. Only initial-fact is there. I don't know why!
Here is my code:
#include "clips.h"
int main()
{
void *theEnv, *newFact, *templatePtr;
DATA_OBJECT theValue;
extern void *InitCImage_1();
theEnv = InitCImage_1();
EnvReset(theEnv);
// One way
templatePtr = EnvFindDeftemplate(theEnv, "Navigation");
newFact = EnvCreateFact(theEnv, templatePtr);
if (newFact == NULL) return -1;
theValue.type = SYMBOL;
theValue.value = EnvAddSymbol(theEnv, "Auto");
EnvPutFactSlot(theEnv, newFact, "FlightStatus", &theValue);
EnvAssert(theEnv, newFact);
// The other way
EnvAssertString(theEnv, "(Navigation (FlightStatus Auto))");
EnvRun(theEnv,-1);
EnvListDeftemplates(theEnv, "stdout", NULL);
EnvListDefrules(theEnv, "stdout", NULL);
EnvListDeffacts(theEnv, "stdout", NULL);
}
What is wrong with my code?

Use:
EnvFacts(theEnv,"stdout",NULL,-1,-1,-1);
Rather than:
EnvListDeffacts(theEnv, "stdout", NULL);
Deffacts are constructs that define a list of facts to be asserted when a (reset) command is performed. There is a pre-defined initial-facts deffacts that asserts the (initial-fact) when a reset is performed. That's what you're seeing listed when you call EnvListDeffacts. You want to call EnvFacts instead to see the facts that have actually been asserted (whether created by a deffacts after a reset or directly using assert).
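For reference, here's a minimal sketch of what the tail of your main() would look like with that change (same environment pointer as in your code); EnvFacts prints the facts currently on the fact list, while EnvListDeffacts only lists deffacts constructs:
    EnvRun(theEnv, -1);
    EnvListDeftemplates(theEnv, "stdout", NULL);
    EnvListDefrules(theEnv, "stdout", NULL);
    EnvFacts(theEnv, "stdout", NULL, -1, -1, -1);   // now shows (initial-fact) and the asserted (Navigation (FlightStatus Auto)) fact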

PrintDlgEx invalid argument, while PrintDlg works

Problem: I need to get PrintDlgEx working for my project, but no combination of options or arguments works for me. It returns E_INVALIDARG for every combination of options I've tried, including the ones I copied from Microsoft samples or other online samples.
Replacing PRINTDLGEX with PRINTDLG and PrintDlgEx with PrintDlg (and removing the group of options that exist only in PRINTDLGEX) works fine.
Unfortunately I need PrintDlgEx, because I really need the Apply button, to change printers or the property sheet without printing, for design and preview.
Please help me find out why I can't get the dialog to show.
Code: while I simplified some pieces, like what should happen on successful return, or setting DEVMODE and DEVNAMES, I tried this exact function, with the same result: invalid argument.
#include <QDebug>
#include <QWidget>
#include <QWindow>
#include <windows.h>

// Simplifying the setup: the real code passes in a QWidget *
void showPrintDialog(QWidget *caller)
{
    // Not returning a value or doing any work. I just want the dialog to pop up for now.
    // Create the standard Windows print dialog
    PRINTDLGEX printDialog;
    memset(&printDialog, 0, sizeof(PRINTDLGEX));
    printDialog.lStructSize = sizeof(PRINTDLGEX);
    printDialog.Flags = PD_RETURNDC |                 // Return a printer device context. Without this, HDC may be undefined
                        PD_USEDEVMODECOPIESANDCOLLATE |
                        PD_NOSELECTION |              // Don't allow selecting individual document pages to print
                        PD_NOPAGENUMS |               // Disables some boxes
                        PD_NOCURRENTPAGE |            // Disables some boxes
                        PD_NONETWORKBUTTON |          // Don't allow networking (but it shows "Find printer"), so what does this do?
                        PD_HIDEPRINTTOFILE;           // Don't allow print to file

    // Only on PRINTDLGEX.
    // This block copied from https://learn.microsoft.com/en-us/windows/win32/dlgbox/using-common-dialog-boxes?redirectedfrom=MSDN
    // I have tried multiple combinations of options, including none; I really don't want any of them.
    printDialog.nStartPage = START_PAGE_GENERAL;
    printDialog.nPageRanges = 1;
    printDialog.nMaxPageRanges = 10;
    LPPRINTPAGERANGE pageRange = (LPPRINTPAGERANGE) GlobalAlloc(GPTR, 10 * sizeof(PRINTPAGERANGE));
    printDialog.lpPageRanges = pageRange;
    printDialog.lpPageRanges[0].nFromPage = 1;
    printDialog.lpPageRanges[0].nToPage = 1;
    printDialog.Flags2 = 0;
    printDialog.ExclusionFlags = 0;
    printDialog.dwResultAction = 0;                   // This will tell me if the result was PRINT

    // The rest of the options are also on PRINTDLG
    printDialog.nMinPage = 1;
    printDialog.nMaxPage = 10;

    // The only options I really need
    printDialog.nCopies = 1;
    printDialog.hDevMode = Q_NULLPTR;                 // which will be better once this works
    printDialog.hDevNames = Q_NULLPTR;                // which will be better once this works
    printDialog.hwndOwner = reinterpret_cast<HWND>(caller->windowHandle()->winId());

    // Calling...
    HRESULT result = PrintDlgEx(&printDialog);
    qDebug() << (result == E_INVALIDARG ? "Invalid Argument\n" : "Success\n");
    // It is always E_INVALIDARG

    // Cleanup
    if (printDialog.hDevMode)
        GlobalFree(printDialog.hDevMode);
    if (printDialog.hDevNames)
        GlobalFree(printDialog.hDevNames);
    if (printDialog.hDC)
        DeleteDC(printDialog.hDC);
}
Platform: Windows 10, latest update;
Qt version: 5.12.7 or higher
(since in the VM I have 5.15.1)
The fact that I am running under Qt should be irrelevant, since this is all Win32 API, apart from the C++ version (11).
I can make your example work if I remove the PD_NONETWORKBUTTON flag.
Please note that while it is documented for the PRINTDLGA struct, it is NOT listed for PRINTDLGEXA.
NOTE: I did get the same error with that flag present.
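For reference, a minimal sketch of the adjusted flag assignment (everything else in your setup stays the same); PD_NONETWORKBUTTON is simply dropped, since it is documented for PRINTDLG but not for PRINTDLGEX:
    printDialog.Flags = PD_RETURNDC |
                        PD_USEDEVMODECOPIESANDCOLLATE |
                        PD_NOSELECTION |
                        PD_NOPAGENUMS |
                        PD_NOCURRENTPAGE |
                        PD_HIDEPRINTTOFILE;           // PD_NONETWORKBUTTON removed

    HRESULT result = PrintDlgEx(&printDialog);        // no longer E_INVALIDARG; check dwResultAction for the user's choice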

LLVM API: correct way to create/dispose

I'm attempting to implement a simple JIT compiler using the LLVM C API. So far I have no problems generating IR code and executing it; that is, until I start disposing objects and recreating them.
What I would like to do is clean up the JIT'ted resources the moment they're no longer used by the engine. What I'm attempting is something like this:
while (true)
{
    // Initialize module & builder
    InitializeCore(GetGlobalPassRegistry());
    module = ModuleCreateWithName(some_unique_name);
    builder = CreateBuilder();

    // Initialize target & execution engine
    InitializeNativeTarget();
    engine = CreateExecutionEngineForModule(...);
    passmgr = CreateFunctionPassManagerForModule(module);
    AddTargetData(GetExecutionEngineTargetData(engine), passmgr);
    InitializeFunctionPassManager(passmgr);

    // [... my fancy JIT code ...] --** Will give a serious error the second iteration

    // Destroy
    DisposePassManager(passmgr);
    DisposeExecutionEngine(engine);
    DisposeBuilder(builder);
    // DisposeModule(module); //--> Commented out: deleted by execution engine
    Shutdown();
}
However, this doesn't seem to work correctly: on the second iteration of the loop I get a pretty bad error...
So to summarize: what's the correct way to destroy and re-create the LLVM API objects?
Posting this as an answer because the code's too long for a comment. If possible, and there are no other constraints, try to use LLVM like this. I am pretty sure the Shutdown() inside the loop is the culprit here. And I don't think it would hurt to keep the builder outside the loop, too. This reflects well the way I use LLVM in my JIT.
InitializeCore(GetGlobalPassRegistry());
InitializeNativeTarget();
builder = CreateBuilder();
while (true)
{
    // Initialize module
    module = ModuleCreateWithName(some_unique_name);

    // Initialize execution engine & pass manager
    engine = CreateExecutionEngineForModule(...);
    passmgr = CreateFunctionPassManagerForModule(module);
    AddTargetData(GetExecutionEngineTargetData(engine), passmgr);
    InitializeFunctionPassManager(passmgr);

    // [... my fancy JIT code ...]

    // Destroy only the per-iteration objects
    DisposePassManager(passmgr);
    DisposeExecutionEngine(engine);
}
DisposeBuilder(builder);
Shutdown();
/* program init */
LLVMInitializeNativeTarget();
LLVMInitializeNativeAsmPrinter();
LLVMInitializeNativeAsmParser();
LLVMLinkInMCJIT();

ctx->context = LLVMContextCreate();
ctx->builder = LLVMCreateBuilderInContext(ctx->context);
LLVMParseBitcodeInContext2(ctx->context, module_template_buf, &ctx->module); /* create module */

/* IR code creation */
{
    function = LLVMAddFunction(ctx->module, "my_func");
    LLVMAppendBasicBlockInContext(ctx->context, ...);
    LLVMBuild...
    ...
}

/* optional optimization */
{
    LLVMPassManagerBuilderRef pass_builder = LLVMPassManagerBuilderCreate();
    LLVMPassManagerBuilderSetOptLevel(pass_builder, 3);
    LLVMPassManagerBuilderSetSizeLevel(pass_builder, 0);
    LLVMPassManagerBuilderUseInlinerWithThreshold(pass_builder, 1000);

    LLVMPassManagerRef function_passes = LLVMCreateFunctionPassManagerForModule(ctx->module);
    LLVMPassManagerRef module_passes = LLVMCreatePassManager();
    LLVMPassManagerBuilderPopulateFunctionPassManager(pass_builder, function_passes);
    LLVMPassManagerBuilderPopulateModulePassManager(pass_builder, module_passes);
    LLVMPassManagerBuilderDispose(pass_builder);

    LLVMInitializeFunctionPassManager(function_passes);
    for (LLVMValueRef value = LLVMGetFirstFunction(ctx->module); value;
         value = LLVMGetNextFunction(value))
    {
        LLVMRunFunctionPassManager(function_passes, value);
    }
    LLVMFinalizeFunctionPassManager(function_passes);
    LLVMRunPassManager(module_passes, ctx->module);

    LLVMDisposePassManager(function_passes);
    LLVMDisposePassManager(module_passes);
}

/* optional, for debugging */
{
    LLVMVerifyModule(ctx->module, LLVMAbortProcessAction, &error);
    LLVMPrintModule...
}

if (LLVMCreateJITCompilerForModule(&ctx->engine, ctx->module, 0, &error) != 0)
    /* handle error */

my_func = (exec_func_t)(uintptr_t)LLVMGetFunctionAddress(ctx->engine, "my_func");

LLVMRemoveModule(ctx->engine, ctx->module, &ctx->module, &error);
LLVMDisposeModule(ctx->module);
LLVMDisposeBuilder(ctx->builder);

/* call the compiled function as often as needed */
my_func(...);

LLVMDisposeExecutionEngine(ctx->engine);
LLVMContextDispose(ctx->context);

/* program finit */
LLVMShutdown();
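For completeness, here is a minimal self-contained sketch of the same create/dispose structure written with the LLVM-prefixed C API names (assuming the shortened names in the question map onto these, and targeting the legacy MCJIT-era C API; exact availability varies between LLVM releases):
#include <llvm-c/Core.h>
#include <llvm-c/ExecutionEngine.h>
#include <llvm-c/Target.h>

int runJitLoop()
{
    char *error = NULL;

    // One-time initialization: target and builder stay alive across iterations.
    LLVMLinkInMCJIT();
    LLVMInitializeNativeTarget();
    LLVMInitializeNativeAsmPrinter();
    LLVMBuilderRef builder = LLVMCreateBuilder();

    for (int i = 0; i < 2; ++i)
    {
        // Per-iteration resources: module, execution engine, pass manager.
        LLVMModuleRef module = LLVMModuleCreateWithName("jit_module");
        LLVMExecutionEngineRef engine;
        if (LLVMCreateExecutionEngineForModule(&engine, module, &error) != 0)
            return 1;                       // engine creation failed; 'error' describes why
        LLVMPassManagerRef passmgr = LLVMCreateFunctionPassManagerForModule(module);
        LLVMInitializeFunctionPassManager(passmgr);

        // [... emit IR with 'builder', run passes, call JIT'ted code ...]

        // Dispose only the per-iteration objects; the engine owns the module.
        LLVMDisposePassManager(passmgr);
        LLVMDisposeExecutionEngine(engine);
    }

    // One-time teardown, done exactly once per process.
    LLVMDisposeBuilder(builder);
    LLVMShutdown();
    return 0;
}
The key point is the same as above: LLVMShutdown() and the target initialization happen once per process, not once per iteration.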

Gtksourceviewmm syntax highlighting not working

I'm trying to use the C++ wrapper for GtkSourceView. I wrote this a long time ago and I remember that it was working, but now everything works except the syntax highlighting, and I'm not sure what the problem is. I hope you can help me; I've read a lot about this library on the internet but I can't find a solution. Here is a simple example. Thanks in advance.
#include "twindow.h"
#include <iostream>
TWindow::TWindow() {
add(m_SourceView);
m_SourceView.set_size_request(640, 480);
m_SourceView.set_show_line_numbers();
m_SourceView.set_tab_width(4);
m_SourceView.set_auto_indent();
m_SourceView.set_show_right_margin();
m_SourceView.set_right_margin_position(80);
m_SourceView.set_highlight_current_line();
m_SourceView.set_smart_home_end(gtksourceview::SOURCE_SMART_HOME_END_ALWAYS);
gtksourceview::init ();
Glib::RefPtr<gtksourceview::SourceBuffer> buffer = m_SourceView.get_source_buffer () ;
if (!buffer) {
std::cerr << "gtksourceview::SourceView::get_source_buffer () failed" << std::endl ;
}
buffer->begin_not_undoable_action();
buffer->set_text(Glib::file_get_contents("main.c"));
buffer->end_not_undoable_action();
buffer->set_highlight_syntax(true);
Glib::RefPtr<gtksourceview::SourceLanguageManager> language_manager = gtksourceview::SourceLanguageManager::create();
Glib::RefPtr<gtksourceview::SourceLanguage> language = gtksourceview::SourceLanguage::create();
language = language_manager->get_language("c");
buffer->set_language(language);
show_all_children();
}
So you want to use the C++ wrapper of GtkSourceView, which means you want gtksourceviewmm.
Why do you create a LanguageManager? You can use the default one.
If you are using version 3.2 of gtksourceviewmm, then look at the docs.
You should also check out this function.
So an example would look like:
Glib::ustring file_path = "/home/user/whatever/main.c";
Glib::RefPtr<Gsv::LanguageManager> language_manager = Gsv::LanguageManager::get_default();
Glib::RefPtr<Gsv::Language> language = language_manager->guess_language(file_path, Glib::ustring());
Another thing I want to mention: you should create a buffer yourself to show the content of the file. In my projects I got a segfault when I used get_source_buffer(), so it seems to be null by default.
Glib::RefPtr<Gsv::Buffer> buffer = Gsv::Buffer::create(language);
buffer->set_text(Glib::file_get_contents(file_path));
this->m_SourceView.set_source_buffer(buffer);
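Putting the pieces together, a minimal sketch of the constructor (assuming gtksourceviewmm 3.x with the Gsv namespace, and that m_SourceView is a Gsv::View member as in your code; the file path is just an example):
#include "twindow.h"

TWindow::TWindow() {
    Gsv::init();                                   // initialize gtksourceviewmm before using it

    add(m_SourceView);
    m_SourceView.set_show_line_numbers();
    m_SourceView.set_highlight_current_line();

    const std::string file_path = "main.c";
    Glib::RefPtr<Gsv::LanguageManager> language_manager = Gsv::LanguageManager::get_default();
    Glib::RefPtr<Gsv::Language> language = language_manager->guess_language(file_path, Glib::ustring());

    // Create the buffer ourselves instead of relying on get_source_buffer().
    Glib::RefPtr<Gsv::Buffer> buffer = Gsv::Buffer::create(language);
    buffer->set_highlight_syntax(true);
    buffer->set_text(Glib::file_get_contents(file_path));
    m_SourceView.set_source_buffer(buffer);

    show_all_children();
}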

TaskDialogIndirect is returning an unusual error code

I'm using TaskDialogIndirect to display prompts to the user. Normally this works just fine, but sometimes, after the program has been running for a while, it begins returning an error code that the MSDN entry does not list as one of the error codes this function can return:
0x80040001 OLE_E_ADVF "Invalid advise flags"
I have checked all the inputs to the function against previous successful calls in the same run. Aside from differences in the string to be displayed, they are identical. (The strings are even the same length.)
// create task dialog struct
TASKDIALOGCONFIG tdc;
ZeroMemory(&tdc, sizeof(TASKDIALOGCONFIG));
tdc.cbSize = sizeof(tdc);
tdc.dwFlags = (((dwMessageBoxFlags & MB_OKCANCEL) == MB_OKCANCEL) ? TDF_ALLOW_DIALOG_CANCELLATION : 0)
              | TDF_POSITION_RELATIVE_TO_WINDOW;
tdc.hwndParent = hwndOwner;
tdc.hInstance = LGetHInstance();
tdc.pszContent = usrText.wsz;
tdc.pButtons = _pButtons;
tdc.cButtons = nButtons;
tdc.pszMainIcon = pszTaskDialogIcon;
tdc.pszWindowTitle = usrCaption.wsz;
tdc.nDefaultButton = nDefaultButton;
// display it now
int iButton = 0;
BOOL b = 0;
HRESULT hResult = TaskDialogIndirect(&tdc, &iButton, NULL, &b);
NEW INFORMATION
At the same time that TaskDialogIndirect stops behaving correctly, ShellExecute also stops working, as does CreateFile.
This was actually caused by an event handle leak elsewhere. When the available handles ran out, no more API calls which needed to create a handle could succeed. They did return a rather odd set of error codes though, none of which was "out of handles".
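As an illustration (my own sketch, not part of the original program): one way to spot this kind of leak is to log the process handle count around the suspected code path and watch whether it keeps climbing; GetProcessHandleCount is a documented kernel32 API.
#include <windows.h>
#include <cstdio>

// Log the current number of handles open in this process.
void LogHandleCount(const char *where)
{
    DWORD handleCount = 0;
    if (GetProcessHandleCount(GetCurrentProcess(), &handleCount))
        std::printf("%s: %lu handles open\n", where, static_cast<unsigned long>(handleCount));
}
Calling LogHandleCount("before") and LogHandleCount("after") around the code that creates events makes an unbounded growth in the count stand out long before API calls start failing.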

OCIDefineByPos() with OCINumbers

I'm developing a C++ application with OCI. I need to fetch data from the DB into an OCINumber. I'm confused about how to use the function OCIDefineByPos() with OCINumbers.
Can someone help me with this?
Given below is the part of the code where I call the OCIDefineByPos function.
pStmt is an OCIStmt* and p_Error is an OCIError*. The following function pointers are used:
1. pf_OCINumberFromReal = function pointer to OCINumberFromReal
2. pf_OCINumberToReal = function pointer to OCINumberToReal
3. pf_DefineByPos = function pointer to OCIDefineByPos
OCINumber ocinTest;
long double dnum = 0.0;
(*pf_OCINumberFromReal)(p_Error, &dnum, sizeof(dnum), &ocinTest);

int iLength1 = sizeof(ocinTest);
OCIDefine *pDfn = NULL;
iRet = (*pf_DefineByPos)(pStmt, &pDfn, p_Error, 1, (dvoid *) &ocinTest,
                         (sword) iLength1, SQLT_NUM, (dvoid *) 0, (ub2 *) 0,
                         (ub2 *) 0, OCI_DEFAULT);
if (iRet == OCI_SUCCESS)
{
    (*pf_OCINumberToReal)(p_Error, &ocinTest, sizeof(dnum), &dnum);
    std::cout << std::fixed << std::setprecision(10) << dnum << std::endl;
}
Although iRet == OCI_SUCCESS, it didn't fetch the value from the DB correctly (the value of the SQL query prepared in pStmt); dnum is 0.0 even after the call. pf_DefineByPos works fine for other data types such as int, double, etc.
So can someone help me find the issue with this?
If your code is not completely taken out of context, then you're missing several essential pieces.
A typical OCI program involves the following steps:
1. Prepare an SQL statement (OCIStmtPrepare)
2. Bind the result columns or output parameters to the variables that will receive the result values (OCIDefineByPos)
3. Execute the statement (OCIStmtExecute)
4. Do something with the result
It seems to me that you're skipping step 3 and expect a result right after step 2. But step 2 doesn't fetch any data; it just creates an association between the query result and your variables.
You need to call OCIDefineByPos for an OCINumber with the SQLT_VNU type rather than SQLT_NUM.
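A minimal sketch of how the two answers fit together (calling the OCI functions directly rather than through the function pointers, and assuming pSvcCtx is the service context handle, which the original snippet doesn't show): define with SQLT_VNU, then execute so the defined variable is actually populated.
OCINumber ocinTest;
OCIDefine *pDfn = NULL;

// Define column 1 as an OCINumber using the SQLT_VNU external type.
sword iRet = OCIDefineByPos(pStmt, &pDfn, p_Error, 1,
                            (dvoid *) &ocinTest, (sword) sizeof(ocinTest),
                            SQLT_VNU, (dvoid *) 0, (ub2 *) 0,
                            (ub2 *) 0, OCI_DEFAULT);

// Execute the SELECT with iters = 1 so the first row is fetched into the defined variable.
iRet = OCIStmtExecute(pSvcCtx, pStmt, p_Error, 1, 0, NULL, NULL, OCI_DEFAULT);
if (iRet == OCI_SUCCESS || iRet == OCI_SUCCESS_WITH_INFO)
{
    double dnum = 0.0;
    OCINumberToReal(p_Error, &ocinTest, sizeof(dnum), &dnum);
    std::cout << std::fixed << std::setprecision(10) << dnum << std::endl;
}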