TouchPanelSetCalibration not updating calibration - c++

Problem
Our product provides a wizard to calibrate the touch screen. A special requirement is that I need to verify every new calibration made by this wizard. The verification itself is quite simple: after the touch screen has been calibrated, a new screen containing 4 touch targets (buttons) is shown. If the user is able to hit each target within a given time frame, the calibration is considered successful. If time runs out, the calibration data in the registry shall be restored and the touch driver shall be restored without restarting.
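The timeout logic of that verification screen can be sketched roughly like this (VERIFY_TIMEOUT_MS and NextTargetWasHit() are placeholder names standing in for the real UI code):

#include <windows.h>

static const DWORD VERIFY_TIMEOUT_MS = 30 * 1000; // assumed time frame
static const int   TARGET_COUNT      = 4;

// Hypothetical stand-in for the real UI: returns true once the user has
// tapped the currently highlighted target button.
static bool NextTargetWasHit() { return true; }

// Returns true if all four targets were hit before the deadline.
bool RunVerification()
{
    const DWORD start = GetTickCount();
    int hits = 0;
    while (hits < TARGET_COUNT)
    {
        // Unsigned subtraction keeps this correct across tick-count wraparound.
        if (GetTickCount() - start >= VERIFY_TIMEOUT_MS)
            return false; // time ran out -> caller restores the old calibration
        if (NextTargetWasHit())
            ++hits;
    }
    return true;
}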
Approach
Backup of HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\TOUCH\CalibrationData
Show Windows CE built-in calibration UI using: TouchCalibrate()
Show custom verification screen as described above.
If verification fails, restore the registry and call TouchPanelSetCalibration(...) with the old calibration data (sketched below).
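Put together, the whole flow looks roughly like this (a condensed sketch for Windows CE with error handling trimmed; ParseCalibrationString() is a hypothetical helper for the REG_SZ value, and the 5-point TouchPanelSetCalibration() signature follows the CE DDK's tchddi.h):

#include <windows.h>
#include <string.h>
#include <tchddi.h>   // TouchPanelSetCalibration (CE touch driver DDI)

bool RunVerification();  // the 4-target screen sketched above

// Hypothetical helper: parses the REG_SZ "CalibrationData" string into the
// point arrays that TouchPanelSetCalibration() expects.
bool ParseCalibrationString(const WCHAR *data,
                            INT32 *sx, INT32 *sy, INT32 *ux, INT32 *uy);

static const WCHAR TOUCH_KEY[] = L"HARDWARE\\DEVICEMAP\\TOUCH";

static bool ReadCalibrationData(WCHAR *buf, DWORD cb)
{
    HKEY key;
    if (RegOpenKeyEx(HKEY_LOCAL_MACHINE, TOUCH_KEY, 0, KEY_READ, &key) != ERROR_SUCCESS)
        return false;
    const LONG rc = RegQueryValueEx(key, L"CalibrationData", NULL, NULL,
                                    (LPBYTE)buf, &cb);
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}

static bool WriteCalibrationData(const WCHAR *buf)
{
    HKEY key;
    if (RegOpenKeyEx(HKEY_LOCAL_MACHINE, TOUCH_KEY, 0, KEY_WRITE, &key) != ERROR_SUCCESS)
        return false;
    const DWORD cb = (DWORD)(wcslen(buf) + 1) * sizeof(WCHAR);
    const LONG rc = RegSetValueEx(key, L"CalibrationData", 0, REG_SZ,
                                  (const BYTE *)buf, cb);
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}

void CalibrateWithVerification()
{
    WCHAR backup[256];
    if (!ReadCalibrationData(backup, sizeof(backup)))   // step 1: backup
        return;

    TouchCalibrate();             // step 2: built-in calibration UI

    if (!RunVerification())       // step 3: custom verification screen
    {
        WriteCalibrationData(backup);      // step 4a: restore the registry
        INT32 sx[5], sy[5], ux[5], uy[5];  // step 4b: push old data to the driver
        if (ParseCalibrationString(backup, sx, sy, ux, uy))
            TouchPanelSetCalibration(5, sx, sy, ux, uy);
    }
}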
When calling TouchPanelSetCalibration(...) I get the following output:
Maximum Allowed Error 54:
Calibration Results:
Screen => Mapped
( 240, 136) => ( 240, 130)
( 96, 54) => ( 93, 57)
( 96, 218) => ( 99, 218)
( 384, 218) => ( 381, 220)
( 384, 54) => ( 387, 55)
Maximum error (square of Euclidean distance in screen units) = 36
The registry is properly restored, and judging by the output (the maximum error of 36 is within the allowed 54) I'm assuming the calibration data is also properly forwarded to the driver.
But somehow the touch calibration is not restored unless the system is restarted.
Do I need to signal this change somehow by sending a message or firing an event? Do I need to make any additional API calls?
...Any help is appreciated
Thanks.
~Sambuca

I also posted this question on MSDN forums. Here's the answer I got there:
The Touch Driver Entrypoint TouchPanelSetCalibration must be called
by GWES to get the calibration data updated. When called from a user
application, the API would only update data held inside the
application process.
But there is another approach to implementing your touch calibration wizard.
The Touch Calibration UI (calibrui) shown by TouchCalibrate() can be
customized. Basically, you'd need to replace the default confirmation
screen with your own implementation.
The instructions on how to clone the default CalibrUi can be found here:
For Windows CE 5.0 in MSDN: http://msdn.microsoft.com/en-us/library/aa452834.aspx
For CE 6.0 and Compact 7: http://guruce.com/blogpost/cloning-calibrui-in-windows-ce-60

Related

Crashing when calling QTcpSocket::setSocketDescriptor()

My project uses QTcpSocket and its setSocketDescriptor() function. The code is quite ordinary:
QTcpSocket *socket = new QTcpSocket();
socket->setSocketDescriptor(this->m_socketDescriptor);
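For reference, the descriptor passed to setSocketDescriptor() typically comes from a QTcpServer::incomingConnection() override; a minimal sketch of that pattern with an explicit error check (the Server class here is illustrative, not my actual code):

#include <QTcpServer>
#include <QTcpSocket>
#include <QDebug>

class Server : public QTcpServer
{
protected:
    void incomingConnection(qintptr socketDescriptor) override
    {
        QTcpSocket *socket = new QTcpSocket(this);
        // setSocketDescriptor() returns false on failure; checking avoids
        // silently keeping a socket that was never initialized.
        if (!socket->setSocketDescriptor(socketDescriptor)) {
            qWarning() << "setSocketDescriptor failed:" << socket->errorString();
            delete socket;
            return;
        }
        addPendingConnection(socket);
    }
};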
The setSocketDescriptor() code worked fine most of the time, but when I ran a performance test on Windows Server 2016 a crash occurred. Debugging with the crash dump, here is the log:
0000004f`ad1ff4e0 : ucrtbase!abort+0x4e
00000000`6ed19790 : Qt5Core!qt_logging_to_console+0x15a
000001b7`79015508 : Qt5Core!QMessageLogger::fatal+0x6d
0000004f`ad1ff0f0 : Qt5Core!QEventDispatcherWin32::installMessageHook+0xc0
00000000`00000000 : Qt5Core!QEventDispatcherWin32::createInternalHwnd+0xf3
000001b7`785b0000 : Qt5Core!QEventDispatcherWin32::registerSocketNotifier+0x13e
000001b7`7ad57580 : Qt5Core!QSocketNotifier::QSocketNotifier+0xf9
00000000`00000001 : Qt5Network!QLocalSocket::socketDescriptor+0x4cf7
00000000`00000000 : Qt5Network!QAbstractSocket::setSocketDescriptor+0x256
In the stderr log, I see these messages:
CreateWindow() for QEventDispatcherWin32 internal window failed (Not enough storage is available to process this command.)
Qt: INTERNAL ERROR: failed to install GetMessage hook: 8, Not enough storage is available to process this command.
Here is the function in the Qt codebase where execution stopped:
void QEventDispatcherWin32::installMessageHook()
{
    Q_D(QEventDispatcherWin32);
    if (d->getMessageHook)
        return;

    // setup GetMessage hook needed to drive our posted events
    d->getMessageHook = SetWindowsHookEx(WH_GETMESSAGE, (HOOKPROC) qt_GetMessageHook,
                                         NULL, GetCurrentThreadId());
    if (Q_UNLIKELY(!d->getMessageHook)) {
        int errorCode = GetLastError();
        qFatal("Qt: INTERNAL ERROR: failed to install GetMessage hook: %d, %s",
               errorCode, qPrintable(qt_error_string(errorCode)));
    }
}
I did some research: the error Not enough storage is available to process this command. may mean the OS (Windows) does not have enough resources to process this function (SetWindowsHookEx); the hook creation fails, Qt raises a fatal error, and finally my app is killed.
I tested this on Windows Server 2019 and the app works fine; no crashes appear.
I just want to know more about the meaning of the error message (stderr), because I don't really know what "not enough storage" refers to. Is it perhaps a limit or a bug in Windows Server 2016? If so, is there any way to overcome this issue on Windows Server 2016?
The error 'Not enough storage is available to process this command' usually occurs on Windows servers when a registry value is set incorrectly, or when the configuration was not set correctly after a recent reset or reinstallation.
Below is a verified procedure for this issue:
Click Start > Run, type regedit and press Enter.
Find the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters
Locate IRPStackSize.
If this value does not exist, right-click the Parameters key, click New > DWORD Value, and type IRPStackSize as the name.
The name of the value must match exactly, including uppercase and lowercase letters.
Right-click IRPStackSize and click Modify.
Select Decimal, enter a value higher than 15 (the maximum is 50 decimal), and click OK.
Close the Registry Editor and restart your computer.
After researching for a few days, I was finally able to configure the Windows Server 2016 settings (registry) to prevent the crash.
So basically it is a limitation of the OS itself, called the desktop heap limitation.
https://learn.microsoft.com/en-us/troubleshoot/windows-server/performance/desktop-heap-limitation-out-of-memory
(The funny thing is that the error message says Not enough storage is available to process this command, but the real problem comes down to the desktop heap limitation.)
So for the solution, follow the steps in this link: https://learn.microsoft.com/en-us/troubleshoot/system-center/orchestrator/increase-maximum-number-concurrent-policy-instances
I increased the 3rd parameter of SharedSection to 2048 and it fixed the issue.
Summary steps:
The desktop heap for the non-interactive desktops is identified by the third parameter of the SharedSection= segment of the following registry value:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows
The default data for this registry value will look something like the following:
%SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,3072,512 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16
The value to be entered into the Third Parameter of the SharedSection= segment should be based on the calculation of:
(number of desired concurrent policies) * 10 = (third parameter value)
Example: if you want 200 concurrent policy instances, then 200 * 10 = 2000; rounding up to a nice round memory number gives you 2048 as the third parameter, resulting in the following update to the registry value:
SharedSection=1024,3072,2048
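To inspect the value before and after editing, it can also be read back programmatically; here is a minimal Win32 sketch that reads the same "Windows" value the steps above modify:

#include <windows.h>
#include <stdio.h>

int main()
{
    wchar_t data[1024];
    DWORD size = sizeof(data);
    // RRF_NOEXPAND keeps %SystemRoot% unexpanded, exactly as stored.
    LONG rc = RegGetValueW(HKEY_LOCAL_MACHINE,
                           L"System\\CurrentControlSet\\Control\\Session Manager\\SubSystems",
                           L"Windows",
                           RRF_RT_REG_EXPAND_SZ | RRF_NOEXPAND,
                           NULL, data, &size);
    if (rc == ERROR_SUCCESS)
        wprintf(L"%s\n", data);   // look for the SharedSection=x,y,z segment
    else
        fwprintf(stderr, L"RegGetValueW failed: %ld\n", rc);
    return rc == ERROR_SUCCESS ? 0 : 1;
}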

Is there an API that will run on iOS in order to change the frames per second of an existing video?

I am looking for a way to receive as input any video that is supported on iOS and save a new video to the device with a new frames-per-second rate. The motivation is to decrease the video size and make it as lightweight as possible.
Tried using the ffmpeg library from the command line (but I need it to run directly from the application).
Tried working with SDAVAssetExportSessionDelegate, but only managed to change the bits per second (each frame's quality is lower).
Thought about working with OpenCV, but I would prefer something lighter and built-in if possible.
Objective-C:
compressionEncoder.videoSettings = @{
    AVVideoCodecKey: AVVideoCodecTypeH264,
    AVVideoWidthKey: [NSNumber numberWithInt:width],   // set your resolution width here
    AVVideoHeightKey: [NSNumber numberWithInt:height], // set your resolution height here
    AVVideoCompressionPropertiesKey: @{
        AVVideoAverageBitRateKey: [NSNumber numberWithInt:bitRateKey], // lower values give a smaller size
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High40,
        // Quality setting only - does not change the playback framerate!
        //AVVideoMaxKeyFrameIntervalKey: @800,
    },
};
compressionEncoder.audioSettings = @{
    AVFormatIDKey: @(kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey: @2,
    AVSampleRateKey: @44100,
    AVEncoderBitRateKey: @128000,
};
Expected: a video with a lower frames-per-second rate, with each frame at the same quality, similar to a brief thumbnail summary of the video.
The type of conversion you are doing will be time and power consuming on a mobile device, but I am guessing you are already aware of that.
Given your end goal is to reduce size while presumably maintaining reasonable quality, you may find you want to experiment with different settings etc. in the encodings.
For this type of video manipulation, ffmpeg is a good choice, as you probably saw from your command line usage. To use ffmpeg from an application, a common approach is to use a well supported 'ffmpeg wrapper' - this effectively runs the ffmpeg command line commands from within your application.
The advantage is that all the usual syntax should work and you can leverage the vast amount of info on ffmpeg command line syntax on the web. The downside is that ffmpeg was not designed to be wrapped like this, so you may see some issues, although with a well supported wrapper you should find either help or that others have already worked around the issues.
Some examples of popular iOS ffmpeg wrappers:
https://github.com/tanersener/mobile-ffmpeg
https://github.com/sunlubo/SwiftFFmpeg
Get MobileFFMpeg up and running:
https://stackoverflow.com/a/59325680/1466453
Once you can make MobileFFMpeg calls in your iOS code, changing the frame rate is pretty straightforward with code like this, where <input path> and <output path> stand for your video file paths:
[MobileFFmpeg execute: @"-i <input path> -filter:v fps=fps=30 <output path>"];

How to enable Long Range for BLE Mesh in Zephyr OS

I'm working on a Bluetooth mesh network solution and I have a requirement to increase range.
I'm using the nRF52840 DK and nRF52840 dongles, with nRF5 SDK for Mesh v3.1.0.
In the Nordic DevZone I found a solution which enables BLE long range mode in the nRF SDK for Mesh.
NOTE! I'm aware the solution doesn't comply with the Bluetooth Mesh standard.
The following changes were applied to nRF5 SDK for Mesh v3.1.0:
In advertise.c, changed set_default_broadcast_configuration() to use RADIO_MODE_NRF_62K5BIT instead of RADIO_MODE_BLE_1MBIT for radio_mode.
In scanner.c, changed scanner_config_reset() to pass RADIO_MODE_NRF_62K5BIT instead of RADIO_MODE_BLE_1MBIT to scanner_config_radio_mode_set().
In radio_config.c, added the following code at the end of radio_config_config():
if (p_config->radio_mode == RADIO_MODE_NRF_62K5BIT)
{
    NRF_RADIO->PCNF0 |=
        ((RADIO_PCNF0_PLEN_LongRange << RADIO_PCNF0_PLEN_Pos) & RADIO_PCNF0_PLEN_Msk) |
        ((2 << RADIO_PCNF0_CILEN_Pos) & RADIO_PCNF0_CILEN_Msk) |
        ((3 << RADIO_PCNF0_TERMLEN_Pos) & RADIO_PCNF0_TERMLEN_Msk);
}
In broadcast.c, added the following to time_required_to_send_us():
if (radio_mode == RADIO_MODE_NRF_62K5BIT)
{
    packet_length_in_bytes += RADIO_PREAMBLE_LENGTH_LR_EXTRA_BYTES;
}
Defined RADIO_PREAMBLE_LENGTH_LR_EXTRA_BYTES = 9 in the same file.
Changed the 5th element of radio_mode_to_us_per_byte[] from 128 to 64.
NOTE that the long-range mode is mislabeled: it is called RADIO_MODE_NRF_62K5BIT in the header file but actually corresponds to the 125 kbps BLE long range mode.
Unfortunately, for relays I'm forced to use Zephyr to support the Friend feature, and Zephyr is not relaying messages after applying the changes to the nRF SDK. I did a brief investigation on the Zephyr side and found that the code bits for BLE long range described above for the nRF SDK are in place and can be enabled using the following Kconfig settings:
CONFIG_BT_AUTO_PHY_UPDATE=y
CONFIG_BT_PHY_UPDATE=y
CONFIG_BT_HCI_MESH_EXT=y
CONFIG_BT_CTLR_PHY=y
CONFIG_BT_CTLR_ADV_EXT=y
CONFIG_BT_CTLR_ADVANCED_FEATURES=y
CONFIG_BT_CTLR_PHY_2M=y
CONFIG_BT_CTLR_PHY_CODED=y
But I still don't see messages being relayed on the Zephyr side (observed via J-Link RTT Viewer). I also tried increasing the log level for Bluetooth and Mesh to DEBUG, but I don't see any signs that messages are malformed or rejected.
Maybe someone has ideas about which direction I should dig in on the Zephyr side?

ParaView: Live point cloud visualization plugin

I am writing a ParaView 5.1.2 plugin in C++ to visualize point cloud data produced by a LiDAR sensor. I noticed that Velodyne has an open source ParaView custom application to visualize their LiDAR data, called VeloView. I tweaked some of their code to get started, but I am stuck now.
So far I have written a reader that takes a pcap file and renders a point cloud that can be played back frame by frame. I also wrote a ParaView source that listens on a port and captures UDP packets; after they are captured, it uses the reader to split them into frames and visualize the point cloud.
Now I would like to take live UDP packets and render the point cloud in real time as each frame is completed.
I am having trouble accomplishing this because of the ParaView plugin structure. Currently, my reader displays a frame when its RequestData method is called. The method looks something like this:
int RequestData(vtkInformation *request, vtkInformationVector **inputVector,
                vtkInformationVector *outputVector)
{
    vtkPolyData *output = vtkPolyData::GetData(outputVector);
    vtkInformation *info = outputVector->GetInformationObject(0);

    int timestep = 0;
    if (info->Has(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP()))
    {
        double timeRequest = info->Get(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP());
        int length = info->Length(vtkStreamingDemandDrivenPipeline::TIME_STEPS());
        timestep = static_cast<int>(floor(timeRequest + 0.5));
    }

    this->Open();
    // GetFrame returns a vtkSmartPointer<vtkPolyData> that is the frame
    output->ShallowCopy(this->GetFrame(timestep));
    this->Close();
    return 1;
}
The RequestData method is called every time the timestep is updated in the ParaView GUI, and the frame for that timestep is copied into the output vector.
I am not sure how to implement this with live data, because in that circumstance RequestData is not called, since no timesteps are requested. I saw there is a way to keep RequestData executing by using CONTINUE_EXECUTING() like this:
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1);
But I do not know whether that is meant to be used for visualizing live data.
For now I am interested in simply reading live packets and throwing them away as soon as their frame is rendered. Does anyone know how I can achieve this?
In the code of VeloView (which basically is a bundled ParaView + Lidar plugin), the timesteps of ParaView are changed by the main application code, not the Lidar plugin.
We advise you to start from the VeloView code, which is much closer to your goal.
If you really want to start from scratch within ParaView, you need to increment the requested timestep yourself.
The newest version of VeloView (unreleased) uses the same mechanism as the ParaView "LiveSource" plugin (available in 5.6+), where the plugin tells ParaView to set a Qt timer that will automatically increment the available and requested timesteps.
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1); relates to another mechanism that will run RequestData multiple times, but it won't take care of updating the requested timestep.
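For illustration, a LiveSource-style reader might look roughly like the sketch below. This assumes ParaView 5.6+, where the source proxy XML carries a <LiveSource interval="100"/> hint that makes ParaView poll GetNeedsUpdate() on a timer; HasNewFrame() and TakeLatestFrame() are hypothetical stand-ins for the UDP packet listener:

#include <vtkInformation.h>
#include <vtkInformationVector.h>
#include <vtkObjectFactory.h>
#include <vtkPolyData.h>
#include <vtkPolyDataAlgorithm.h>
#include <vtkSmartPointer.h>

class vtkLiveLidarSource : public vtkPolyDataAlgorithm
{
public:
    static vtkLiveLidarSource *New();
    vtkTypeMacro(vtkLiveLidarSource, vtkPolyDataAlgorithm);

    // Polled periodically by ParaView's live-source timer. Returning true
    // triggers a new pipeline update, which in turn calls RequestData().
    bool GetNeedsUpdate()
    {
        if (this->HasNewFrame()) // hypothetical: has the listener assembled a frame?
        {
            this->Modified();    // mark the source dirty so the update really runs
            return true;
        }
        return false;
    }

protected:
    vtkLiveLidarSource() { this->SetNumberOfInputPorts(0); }

    int RequestData(vtkInformation *, vtkInformationVector **,
                    vtkInformationVector *outputVector) override
    {
        vtkPolyData *output = vtkPolyData::GetData(outputVector);
        // Hand the most recently completed frame to the pipeline; the
        // listener can then drop its buffered packets.
        output->ShallowCopy(this->TakeLatestFrame());
        return 1;
    }

private:
    bool HasNewFrame();                             // hypothetical
    vtkSmartPointer<vtkPolyData> TakeLatestFrame(); // hypothetical
};

vtkStandardNewMacro(vtkLiveLidarSource);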
Best,
Bastien Jacquet
VeloView project leader

Session 0 capture screen

I tried https://stackoverflow.com/a/30138664/533237 and was able to capture the screen.
But I want to capture the screen from an application running in session 0 or as another user. I introduced a 10-second sleep before capturing and switched to another user.
I also tried PsExec.exe -h -s E:\sc.exe. Both attempts throw errors:
C:\Users\unity\Documents\Visual Studio 2015\Projects\ConsoleApplication2\Debug>sc.exe
FAILURE 0x8876086C (-2005530516)
line: 60 file: 'c:\users\unity\documents\visual studio 2015\projects\consoleapplication2\consoleapplication2\consoleapplication2.cpp'
expr: 'd3d->GetAdapterDisplayMode(adapter, &mode)'
C:\Users\unity\Documents\Visual Studio 2015\Projects\ConsoleApplication2\Debug>PsExec.exe -h -s E:\sc.exe -w E:\
PsExec v2.11 - Execute processes remotely
Copyright (C) 2001-2014 Mark Russinovich
Sysinternals - www.sysinternals.com
FAILURE 0x8876086C (-2005530516)
line: 60 file: 'c:\users\unity\documents\visual studio 2015\projects\consoleapplication2\consoleapplication2\consoleapplication2.cpp'
expr: 'd3d->GetAdapterDisplayMode(adapter, &mode)'
I commented out GetAdapterDisplayMode and hardcoded the height and width, but CreateDevice failed:
FAILURE 0x8876086A (-2005530518)
line: 76 file: 'c:\users\unity\documents\visual studio 2015\projects\consoleapplication2\consoleapplication2\consoleapplication2.cpp'
expr: 'd3d->CreateDevice(adapter, D3DDEVTYPE_HAL, NULL, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &parameters, &device)'
Edit:
The idea is to have a single app running in the background that captures anything being displayed, irrespective of which user is logged in, or even if no one is logged in (lock/login screen).
There are two levels of problems with this.
On one level, while a lot of GDI will work, session 0 is not connected to a functional display device, and certainly not one that is capable of D3D.
On another level, while things like the DWM have been introduced, the Windows API has always presented a display model where invisible screen pixels simply don't exist. The entire Windows display model is built around getting windows to cooperatively paint to a shared display surface, and any parts of a window that are uncovered are repainted on demand by the desktop composition system.
This means that, in a very fundamental way, you cannot screen capture anything from session 0: in order to do so, session 0 would have to be attached to the active display device.