I attempted to activate the 30-day trial on PureBasic version 5.70 LTS, but using the posted code returns the following error when checking syntax. (I'm using the 32-bit version on a 64-bit Windows 7 machine, if that makes any difference.)
I'm using:
Dim glob As new Chilkat.Global
glob.UnlockBundle("Start my 30 day trial")
and get the error below:
Line 4: 'Dim' syntax is: Dim Name(nb).
Dim is used to define arrays. To define an array of 3 strings, you write
Dim names.s(2)
The .s indicates that it is an array of strings.
The 2 indicates the highest index available. So, counting from 0, it means three slots: 0, 1, and 2.
According to the PureBasic section of the Chilkat website, to unlock the bundle you should follow the example below.
IncludeFile "CkGlobal.pb"

Procedure ChilkatExample()

    ; The Chilkat API can be unlocked for a fully-functional 30-day trial by passing any
    ; string to the UnlockBundle method. A program can unlock once at the start. Once unlocked,
    ; all subsequently instantiated objects are created in the unlocked state.
    ;
    ; After licensing Chilkat, replace the "Anything for 30-day trial" with the purchased unlock code.
    ; To verify the purchased unlock code was recognized, examine the contents of the LastErrorText
    ; property after unlocking. For example:
    glob.i = CkGlobal::ckCreate()
    If glob.i = 0
        Debug "Failed to create object."
        ProcedureReturn
    EndIf

    success.i = CkGlobal::ckUnlockBundle(glob,"Anything for 30-day trial")
    If success <> 1
        Debug CkGlobal::ckLastErrorText(glob)
        CkGlobal::ckDispose(glob)
        ProcedureReturn
    EndIf

    status.i = CkGlobal::ckUnlockStatus(glob)
    If status = 2
        Debug "Unlocked using purchased unlock code."
    Else
        Debug "Unlocked in trial mode."
    EndIf

    ; The LastErrorText can be examined in the success case to see if it was unlocked in
    ; trial mode, or with a purchased unlock code.
    Debug CkGlobal::ckLastErrorText(glob)

    CkGlobal::ckDispose(glob)
    ProcedureReturn

EndProcedure
My project uses QTcpSocket and the function setSocketDescriptor(). The code is very ordinary:
QTcpSocket *socket = new QTcpSocket();
socket->setSocketDescriptor(this->m_socketDescriptor);
This code worked fine most of the time, but when I ran a performance test on Windows Server 2016, a crash occurred. Debugging with the crash dump, here is the log:
0000004f`ad1ff4e0 : ucrtbase!abort+0x4e
00000000`6ed19790 : Qt5Core!qt_logging_to_console+0x15a
000001b7`79015508 : Qt5Core!QMessageLogger::fatal+0x6d
0000004f`ad1ff0f0 : Qt5Core!QEventDispatcherWin32::installMessageHook+0xc0
00000000`00000000 : Qt5Core!QEventDispatcherWin32::createInternalHwnd+0xf3
000001b7`785b0000 : Qt5Core!QEventDispatcherWin32::registerSocketNotifier+0x13e
000001b7`7ad57580 : Qt5Core!QSocketNotifier::QSocketNotifier+0xf9
00000000`00000001 : Qt5Network!QLocalSocket::socketDescriptor+0x4cf7
00000000`00000000 : Qt5Network!QAbstractSocket::setSocketDescriptor+0x256
In the stderr log, I see these messages:
CreateWindow() for QEventDispatcherWin32 internal window failed (Not enough storage is available to process this command.)
Qt: INTERNAL ERROR: failed to install GetMessage hook: 8, Not enough storage is available to process this command.
Here is the function in the Qt codebase where the code stopped:
void QEventDispatcherWin32::installMessageHook()
{
    Q_D(QEventDispatcherWin32);

    if (d->getMessageHook)
        return;

    // setup GetMessage hook needed to drive our posted events
    d->getMessageHook = SetWindowsHookEx(WH_GETMESSAGE, (HOOKPROC) qt_GetMessageHook, NULL, GetCurrentThreadId());
    if (Q_UNLIKELY(!d->getMessageHook)) {
        int errorCode = GetLastError();
        qFatal("Qt: INTERNAL ERROR: failed to install GetMessage hook: %d, %s",
               errorCode, qPrintable(qt_error_string(errorCode)));
    }
}
From my research, the error "Not enough storage is available to process this command." probably means the OS (Windows) did not have enough resources to process this call (SetWindowsHookEx) and failed to create the hook; Qt then raises a fatal error, and finally my app is killed.
I tested this on Windows Server 2019 and the app works fine; no crashes appear.
I just want to understand the meaning of the error message (stderr), because I don't really know what "Not enough storage" means here. Is it perhaps a limitation or bug of Windows Server 2016? If so, is there any way to overcome this issue on Windows Server 2016?
The error 'Not enough storage is available to process this command' usually occurs on Windows servers when a registry value is set incorrectly, or when the configuration is wrong after a recent reset or reinstallation.
Below is a verified procedure for this issue:
Click on Start > Run > regedit and press Enter
Find this key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters
Locate IRPStackSize
If this value does not exist, right-click the Parameters key, click New > DWORD Value, and type IRPStackSize as the name.
The name of the value must match exactly (including uppercase and lowercase letters) what I have above.
Right-click IRPStackSize and click Modify
Select Decimal, enter a value higher than 15 (the maximum value is 50 decimal), and click OK
You can close the Registry Editor and restart your computer.
After researching for a few days, I was finally able to configure a Windows Server 2016 setting (in the registry) to prevent the crash.
So basically it is a limitation of the OS itself, called the desktop heap limitation.
https://learn.microsoft.com/en-us/troubleshoot/windows-server/performance/desktop-heap-limitation-out-of-memory
(The funny thing is that the error message says "Not enough storage is available to process this command", but the real problem comes down to the desktop heap limitation.)
So for the solution, follow the steps in this link: https://learn.microsoft.com/en-us/troubleshoot/system-center/orchestrator/increase-maximum-number-concurrent-policy-instances
I increased the 3rd parameter of SharedSection to 2048 and it fixed the issue.
Summary steps:
Desktop Heap for the non-interactive desktops is identified by the third parameter of the SharedSection= segment of the following registry value:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows
The default data for this registry value will look something like the following:
%SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,3072,512 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16
The value to be entered into the Third Parameter of the SharedSection= segment should be based on the calculation of:
(number of desired concurrent policies) * 10 = (third parameter value)
Example: If it's desired to have 200 concurrent policy instances, then 200 * 10 = 2000; rounding up to a nice memory number gives you 2048 as the third parameter, resulting in the following update to be made to the registry value:
SharedSection=1024,3072,2048
I am new to working with Promela and, in particular, SPIN. I have a model which I am trying to verify, and I can't understand SPIN's output well enough to resolve the problem.
Here is what I did:
spin -a untitled.pml
gcc -o pan pan.c
./pan
The output was as follows:
pan:1: VECTORSZ is too small, edit pan.h (at depth 0)
pan: wrote untitled.pml.trail
(Spin Version 6.4.5 -- 1 January 2016)
Warning: Search not completed
+ Partial Order Reduction
Full statespace search for:
never claim - (none specified)
assertion violations +
acceptance cycles - (not selected)
invalid end states +
State-vector 8172 byte, depth reached 0, errors: 1
0 states, stored
0 states, matched
0 transitions (= stored+matched)
0 atomic steps
hash conflicts: 0 (resolved)
I then ran SPIN again to try to determine the cause of the problem by examining the trail file. I used this command:
spin -t -v -p untitled.pml
This was the result:
using statement merging
spin: trail ends after -4 steps
#processes: 1
( global variable dump omitted )
-4: proc 0 (:init::1) untitled.pml:173 (state 1)
1 process created
According to this output (as I understand it), the verification is failing during the "init" procedure. The relevant code from within untitled.pml is this:
init {
int count = 0;
int ordinal = N;
do // This is line 173
:: (count < 2 * N + 1) ->
At this point I have no idea what is causing the problem since, to me, the "do" statement should execute just fine.
Can anyone please help me understand SPIN's output so I can remove this error during the verification process? For reference, the model does produce the correct output.
You can simply ignore the trail file in this case; it is not relevant at all.
The error message
pan:1: VECTORSZ is too small, edit pan.h (at depth 0)
tells you that the value of the VECTORSZ directive is too small to successfully verify your model.
By default, VECTORSZ has size 1024.
To fix this issue, try compiling your verifier with a larger VECTORSZ value:
spin -a untitled.pml
gcc -DVECTORSZ=2048 -o run pan.c
./run
If 2048 doesn't work either, try increasingly larger values.
I need guidance on implementing a WHILE loop with Kettle/PDI. The scenario is:
(1) I have some data (maybe thousands or even millions of rows) in a table, to be validated against a remote server.
(2) Read them and look them up on the remote server; I use a Modified Java Script step for this, as the remote-server lookup validation is defined in an external Java JAR file (I can use the "Change number of copies to start..." option on the Modified Java Script step and set it to 5 or 10).
(3) Update the result in the database table. There will be 50 to 60% connection-failure cases in each session.
(4) Repeat steps 1 to 3 until everything is updated to success.
(5) Stop looping on the Nth cycle; this is to avoid very long or infinite looping. N may be 5 or 10.
How do I design such a WHILE loop in Pentaho Kettle?
Have you seen this link? It gives a pretty detailed explanation of how to implement a while loop.
You need a parent job with a sub-transformation that checks the condition and returns a variable to the job indicating whether to abort or to continue.
I have C++ code in which I run SQL*Loader using system(). When SQL*Loader executes, I get the messages below, which I want to suppress:
SQL*Loader: Release 10.2.0.1.0 - Production on Thu Mar 14 14:11:25 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Commit point reached - logical record count 20
Commit point reached - logical record count 40
Commit point reached - logical record count 60
Commit point reached - logical record count 80
Remember that the system function uses the shell to execute the command. So you can use normal shell redirection:
system("/some/program > /dev/null");
You can use the silent=ALL option to suppress these messages:
system("/orahomepath/bin/sqlldr silent=ALL ...")
See also SQL*Loader Command-Line Reference:
As SQL*Loader executes, you also see feedback messages on the screen, for example:
Commit point reached - logical record count 20
You can suppress these messages by specifying SILENT with one or more values:
...
ALL - Implements all of the suppression values: HEADER, FEEDBACK, ERRORS, DISCARDS, and PARTITIONS.
Depending on the sqlldr implementation, you might still end up with some output either way; if you need complete silence, see @Joachim's answer above using shell redirection.
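Putting the two together, here is a minimal C++ sketch; the userid and control file names below are placeholders, and the redirection assumes a POSIX-style shell:
// Run SQL*Loader with silent=ALL and discard any remaining output via
// shell redirection; the userid/control values below are placeholders.
#include <cstdlib>
#include <iostream>

int main()
{
    int rc = std::system(
        "/orahomepath/bin/sqlldr userid=scott/tiger control=load.ctl "
        "silent=ALL > /dev/null 2>&1");

    // std::system returns an implementation-defined status; 0 normally
    // means the command completed successfully.
    if (rc != 0)
        std::cerr << "sqlldr exited with status " << rc << "\n";
    return rc == 0 ? 0 : 1;
}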
So, quite simply, the question is how to get the system boot-up time on Windows with C/C++.
Searching for this hasn't gotten me any answer; I have only found a really hacky approach of reading a file timestamp (needless to say, I abandoned reading about that halfway through).
Another approach I found was reading Windows diagnostic logged events; supposedly those include the last boot-up time.
Does anyone know how to do this (with hopefully not too many ugly hacks)?
GetTickCount64 "retrieves the number of milliseconds that have elapsed since the system was started."
Once you know how long the system has been running, it is simply a matter of subtracting this duration from the current time to determine when it was booted. For example, using the C++11 chrono library (supported by Visual C++ 2012):
auto uptime = std::chrono::milliseconds(GetTickCount64());
auto boot_time = std::chrono::system_clock::now() - uptime;
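For reference, a self-contained version of this approach might look like the following (a sketch only; the result is approximate because GetTickCount64 advances in timer-tick increments):
#include <chrono>
#include <ctime>
#include <iostream>
#include <windows.h>

int main()
{
    // Milliseconds since boot, subtracted from the current wall-clock time.
    auto uptime = std::chrono::milliseconds(GetTickCount64());
    auto boot_time = std::chrono::system_clock::now() - uptime;

    std::time_t t = std::chrono::system_clock::to_time_t(boot_time);
    std::cout << "Booted (approx.): " << std::ctime(&t);
}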
You can also use WMI to get the precise time of boot. WMI is not for the faint of heart, but it will get you what you are looking for.
The information in question is on the Win32_OperatingSystem object under the LastBootUpTime property. You can examine other properties using WMI Tools.
Edit:
You can also get this information from the command line if you prefer.
wmic OS Get LastBootUpTime
As an example in C# it would look like the following (Using C++ it is rather verbose):
static void Main(string[] args)
{
    // Create a query for OS objects
    SelectQuery query = new SelectQuery("Win32_OperatingSystem", "Status=\"OK\"");

    // Initialize an object searcher with this query
    ManagementObjectSearcher searcher = new ManagementObjectSearcher(query);

    string dtString;

    // Get the resulting collection and loop through it
    foreach (ManagementObject envVar in searcher.Get())
        dtString = envVar["LastBootUpTime"].ToString();
}
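Since the question asks about C/C++, here is roughly what the same query looks like against the WMI COM API. This is only a sketch using the standard COM/WMI initialization boilerplate; error handling is condensed, and a real program should check every HRESULT:
#define _WIN32_DCOM
#include <comdef.h>
#include <Wbemidl.h>
#include <iostream>
#pragma comment(lib, "wbemuuid.lib")

int main()
{
    if (FAILED(CoInitializeEx(0, COINIT_MULTITHREADED)))
        return 1;
    CoInitializeSecurity(NULL, -1, NULL, NULL,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         NULL, EOAC_NONE, NULL);

    IWbemLocator *locator = NULL;
    IWbemServices *services = NULL;
    if (FAILED(CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                                IID_IWbemLocator, (LPVOID *)&locator)))
        return 1;
    if (FAILED(locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"),
                                      NULL, NULL, 0, NULL, 0, 0, &services)))
        return 1;
    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, NULL,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                      NULL, EOAC_NONE);

    // The C# example above additionally filters on Status="OK".
    IEnumWbemClassObject *results = NULL;
    if (FAILED(services->ExecQuery(
            _bstr_t(L"WQL"),
            _bstr_t(L"SELECT LastBootUpTime FROM Win32_OperatingSystem"),
            WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
            NULL, &results)))
        return 1;

    IWbemClassObject *obj = NULL;
    ULONG returned = 0;
    while (results->Next(WBEM_INFINITE, 1, &obj, &returned) == S_OK && returned)
    {
        VARIANT value;
        if (SUCCEEDED(obj->Get(L"LastBootUpTime", 0, &value, NULL, NULL)))
        {
            // A CIM_DATETIME string, e.g. 20190524222528.665400-420
            std::wcout << value.bstrVal << std::endl;
            VariantClear(&value);
        }
        obj->Release();
    }

    results->Release();
    services->Release();
    locator->Release();
    CoUninitialize();
}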
The "System Up Time" performance counter on the "System" object is another source. It's available programmatically using the PDH Helper methods. It is, however, not robust to sleep/hibernate cycles so is probably not much better than GetTickCount()/GetTickCount64().
Reading the counter returns a 64-bit FILETIME value, the number of 100-NS ticks since the Windows Epoch (1601-01-01 00:00:00 UTC). You can also see the value the counter returns by reading the WMI table exposing the raw values used to compute this. (Read programmatically using COM, or grab the command line from wmic:)
wmic path Win32_PerfRawData_PerfOS_System get systemuptime
That query produces 132558992761256000 for me, corresponding to Saturday, January 23, 2021 6:14:36 PM UTC.
You can use the PerfFormattedData equivalent to get a floating point number of seconds, or read that from the command line in wmic or query the counter in PowerShell:
Get-Counter -Counter '\system\system up time'
This returns an uptime of 427.0152 seconds.
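If you want that counter directly from C++ rather than through wmic or PowerShell, a small PDH sketch could look like this (it assumes the English counter path "\System\System Up Time"; PdhAddEnglishCounter requires Vista or later):
#include <windows.h>
#include <pdh.h>
#include <iostream>
#pragma comment(lib, "pdh.lib")

int main()
{
    PDH_HQUERY query = NULL;
    PDH_HCOUNTER counter = NULL;

    if (PdhOpenQueryW(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    // PdhAddEnglishCounter avoids problems with localized counter names.
    if (PdhAddEnglishCounterW(query, L"\\System\\System Up Time", 0, &counter) != ERROR_SUCCESS)
        return 1;

    // "System Up Time" is an elapsed-time counter, so a single sample is enough.
    if (PdhCollectQueryData(query) != ERROR_SUCCESS)
        return 1;

    PDH_FMT_COUNTERVALUE value;
    if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value) != ERROR_SUCCESS)
        return 1;

    std::cout << "Uptime: " << value.doubleValue << " seconds\n";
    PdhCloseQuery(query);
    return 0;
}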
I also implemented each of the other 3 answers and have some observations that may help those trying to choose a method.
Using GetTickCount64 and subtracting from current time
The fastest method, clocking in at 0.112 ms.
Does not produce a unique/consistent value at the 100-ns resolution of its arguments, as it is dependent on clock ticks. Returned values are all within 1/64 of a second of each other.
Requires Vista or newer; XP's 32-bit counter rolls over at ~49 days and can't be used for this approach if your application/library must support older Windows versions.
Using WMI query of the LastBootUpTime field of Win32_OperatingSystem
Took 84 ms using COM, 202 ms using the wmic command line.
Produces a consistent value as a CIM_DATETIME string
WMI class requires Vista or newer.
Reading Event Log
The slowest method, taking 229 ms
Produces a consistent value in units of seconds (Unix time)
Works on Windows 2000 or newer.
As pointed out by Jonathan Gilbert in the comments, is not guaranteed to produce a result.
The methods also produced different timestamps:
UpTime: 1558758098843 = 2019-05-25 04:21:38 UTC (sometimes :37)
WMI: 20190524222528.665400-420 = 2019-05-25 05:25:28 UTC
Event Log: 1558693023 = 2019-05-24 10:17:03 UTC
Conclusion:
The Event Log method is compatible with older Windows versions, produces a consistent timestamp in Unix time that's unaffected by sleep/hibernate cycles, but is also the slowest. Given that this is unlikely to be run in a loop, this may be an acceptable performance impact. However, using this approach still requires handling the situation where the Event Log reaches capacity and deletes older messages, potentially using one of the other options as a backup.
C++ Boost used to use WMI LastBootUpTime but switched, in version 1.54, to checking the system event log, and apparently for a good reason:
ABI breaking: Changed bootstamp function in Windows to use EventLog service start time as system bootup time. Previously used LastBootupTime from WMI was unstable with time synchronization and hibernation and unusable in practice. If you really need to obtain pre Boost 1.54 behaviour define BOOST_INTERPROCESS_BOOTSTAMP_IS_LASTBOOTUPTIME from command line or detail/workaround.hpp.
Check out boost/interprocess/detail/win32_api.hpp, around line 2201, the implementation of the function inline bool get_last_bootup_time(std::string &stamp) for an example. (I'm looking at version 1.60, if you want to match line numbers.)
Just in case Boost ever dies somehow and my pointing you to Boost doesn't help (yeah right), the function you'll want is mainly ReadEventLogA and the event ID to look for ("Event Log Started" according to Boost comments) is apparently 6005.
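If you do want to roll that yourself without Boost, a sketch along the same lines could look like the following: scan the System log backwards for event ID 6005 and read its TimeGenerated field, which is a Unix timestamp in seconds. A more careful version would also verify that the record's source name is "EventLog" and grow the buffer when ReadEventLog reports it is too small:
#include <windows.h>
#include <ctime>
#include <iostream>
#include <vector>

int main()
{
    HANDLE log = OpenEventLogA(NULL, "System");
    if (!log)
        return 1;

    std::vector<BYTE> buffer(0x10000);
    DWORD bytesRead = 0, bytesNeeded = 0;

    // Read newest-to-oldest so the first 6005 record found is the latest boot.
    while (ReadEventLogA(log,
                         EVENTLOG_BACKWARDS_READ | EVENTLOG_SEQUENTIAL_READ,
                         0,
                         buffer.data(),
                         (DWORD)buffer.size(),
                         &bytesRead,
                         &bytesNeeded))
    {
        BYTE *p = buffer.data();
        while (p < buffer.data() + bytesRead)
        {
            EVENTLOGRECORD *record = (EVENTLOGRECORD *)p;
            if ((record->EventID & 0xFFFF) == 6005)
            {
                std::time_t bootTime = record->TimeGenerated; // seconds since 1970, UTC
                std::cout << "Event log started (boot, UTC): "
                          << std::asctime(std::gmtime(&bootTime));
                CloseEventLog(log);
                return 0;
            }
            p += record->Length;
        }
    }

    CloseEventLog(log);
    return 1; // no 6005 record found (the log may have wrapped)
}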
I haven't played with this much, but I personally think the best way is probably going to be to query the start time of the "System" process. On Windows, the kernel allocates a process on startup for its own purposes (surprisingly, a quick Google search doesn't easily uncover what its actual purposes are, though I'm sure the information is out there). This process is called simply "System" in the Task Manager, and always has PID 4 on current Windows versions (apparently NT 4 and Windows 2000 may have used PID 8 for it). This process never exits as long as the system is running, and in my testing behaves like a full-fledged process as far as its metadata is concerned. From my testing, it looks like even non-elevated users can open a handle to PID 4, requesting PROCESS_QUERY_LIMITED_INFORMATION, and the resulting handle can be used with GetProcessTimes, which will fill in the lpCreationTime with the UTC timestamp of the time the process started. As far as I can tell, there isn't any meaningful way in which Windows is running before the System process is running, so this timestamp is pretty much exactly when Windows started up.
#include <iostream>
#include <iomanip>
#include <memory>       // std::unique_ptr
#include <type_traits>  // std::remove_pointer
#include <windows.h>

using namespace std;

int main()
{
    // PID 4 is the "System" process on current Windows versions.
    unique_ptr<remove_pointer<HANDLE>::type, decltype(&::CloseHandle)> hProcess(
        ::OpenProcess(
            PROCESS_QUERY_LIMITED_INFORMATION,
            FALSE, // bInheritHandle
            4),    // dwProcessId
        ::CloseHandle);

    FILETIME creationTimeStamp, exitTimeStamp, kernelTimeUsed, userTimeUsed;
    FILETIME creationTimeStampLocal;
    SYSTEMTIME creationTimeStampSystem;

    if (::GetProcessTimes(hProcess.get(), &creationTimeStamp, &exitTimeStamp, &kernelTimeUsed, &userTimeUsed)
        && ::FileTimeToLocalFileTime(&creationTimeStamp, &creationTimeStampLocal)
        && ::FileTimeToSystemTime(&creationTimeStampLocal, &creationTimeStampSystem))
    {
        __int64 ticks =
            ((__int64)creationTimeStampLocal.dwHighDateTime) << 32 |
            creationTimeStampLocal.dwLowDateTime;

        wios saved(NULL);
        saved.copyfmt(wcout);

        wcout << setfill(L'0')
              << setw(4) << creationTimeStampSystem.wYear << L'-'
              << setw(2) << creationTimeStampSystem.wMonth << L'-'
              << setw(2) << creationTimeStampSystem.wDay
              << L' '
              << setw(2) << creationTimeStampSystem.wHour << L':'
              << setw(2) << creationTimeStampSystem.wMinute << L':'
              << setw(2) << creationTimeStampSystem.wSecond << L'.'
              << setw(7) << (ticks % 10000000)
              << endl;

        wcout.copyfmt(saved);
    }
}
Comparison for my current boot:
system_clock::now() - milliseconds(GetTickCount64()):
2020-07-18 17:36:41.3284297
2020-07-18 17:36:41.3209437
2020-07-18 17:36:41.3134106
2020-07-18 17:36:41.3225148
2020-07-18 17:36:41.3145312
(result varies from call to call because system_clock::now() and ::GetTickCount64() don't run at exactly the same time and don't have the same precision)
wmic OS Get LastBootUpTime
2020-07-18 17:36:41.512344
Event Log
No result because the event log entry doesn't exist at this time on my system (earliest event is from July 23)
GetProcessTimes on PID 4:
2020-07-18 17:36:48.0424863
It's a few seconds different from the other methods, but I can't think of any way that it is wrong per se, because, if the System process wasn't running yet, was the system actually booted?