MFC. Quickly highlight all matches in RichEditCtrl - c++

I have a very large text (>50 MB).
FindText, SetSel and SetSelectionCharFormat are too slow for me.
I tried to format the text first and then display it, but that was even slower.
Can I work with the RichEditCtrl in memory and then just display the result?
Or can the first option be sped up, or can you solve my problem in another way?

My measurements of the improvement are different from yours.
Here is my code:
void CRichEditAppView::OnEditHighlight()
{
    FINDTEXTEX ft = {};
    ft.chrg = { 0, -1 };
    ft.lpstrText = L"Lorem ipsum";
    DWORD dwFlags(FR_DOWN);

    CHARFORMAT2 cf = {};
    cf.cbSize = sizeof cf;
    cf.dwMask = CFM_BACKCOLOR;
    cf.crBackColor = RGB(255, 255, 0);

    CRichEditCtrl& ctrl = GetRichEditCtrl();
    ctrl.HideSelection(TRUE, FALSE);
    ctrl.SetRedraw(FALSE);

    int count(0);
    while (ctrl.FindTextW(dwFlags, &ft) >= 0)
    {
        ctrl.SetSel(ft.chrgText);
        ctrl.SetSelectionCharFormat(cf);
        ft.chrg.cpMin = ft.chrgText.cpMax + 1;
        count++;
    }

    ctrl.HideSelection(FALSE, FALSE);
    ctrl.SetRedraw(TRUE);
    ctrl.Invalidate();
}
I have tested it on a file with 3,000 copies of the "Lorem ipsum" text (file size 1,379 KB).
The "naive" implementation (without the calls to HideSelection() and SetRedraw()) took 11 seconds.
Calling HideSelection() reduced the time to 9 seconds, and adding SetRedraw() brought it down to 1.2 seconds, so I would expect you to see roughly a 10-times improvement.
Just for comparison, removing the call to SetSelectionCharFormat() only saves a further 0.4 seconds.
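If that is still too slow, one more thing that may be worth trying (a sketch only; I have not measured it on your file) is masking out the control's change notifications while the loop runs, since every SetSel()/SetSelectionCharFormat() pair can otherwise trigger EN_SELCHANGE/EN_CHANGE processing in the parent:

// Sketch: suppress EN_CHANGE / EN_SELCHANGE notifications during the loop.
// CRichEditCtrl::GetEventMask()/SetEventMask() wrap EM_GETEVENTMASK/EM_SETEVENTMASK.
DWORD oldMask = (DWORD)ctrl.GetEventMask();
ctrl.SetEventMask(0);            // no notifications while highlighting
// ... the FindTextW / SetSel / SetSelectionCharFormat loop from above ...
ctrl.SetEventMask(oldMask);      // restore the original notification mask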

Related

Why is the xTimerPeriodInTicks affecting the code's runtime?

For a report I'm trying to time the number of cycles it takes to execute a couple of separate functions in a program running on an ESP32, using FreeRTOS. The code looks something like this, with init_input_generator() being the first function called; here I'm timing onesecwindow.fifo_iir_filter_datapoint():
float geninputarray[SAMPLING_RATE]; // stores the values of an artificial signal we can use to test the algorithm
fifo_buffer onesecwindow = fifo_buffer(SAMPLING_RATE);

esp_err_t init_input_generator() {
    dsps_tone_gen_f32(geninputarray, SAMPLING_RATE, 4, 0.04, 0); // 10Hz
    TimerHandle_t input_signal_timer =
        xTimerCreate("input_timer", pdMS_TO_TICKS(10), pdTRUE, (void *)0, input_generator_callback);
    xTimerStart(input_signal_timer, 0);
    return ESP_OK;
}

void input_generator_callback(TimerHandle_t xTimer)
{
    xTaskCreatePinnedToCore(
        input_counter,
        "input_generator",
        4096,
        NULL,
        tskIDLE_PRIORITY + 11,
        NULL,
        PRO_CPU_NUM);
}

void input_counter(TimerHandle_t xTimer)
{
    stepcounter = (stepcounter + 1) % SAMPLING_RATE;
    insert_new_data(geninputarray[stepcounter]);
    vTaskDelete(nullptr); // to gracefully end the task as returning is not allowed
}

esp_err_t insert_new_data(float datapoint)
{
    onesecwindow.fifo_write(datapoint); // writes the new datapoint into the onesec window
    unsigned int start_time = dsp_get_cpu_cycle_count();
    onesecwindow.fifo_iir_filter_datapoint();
    unsigned int end_time = dsp_get_cpu_cycle_count();
    printf("%i\n", (end_time - start_time));
}
The thing I'm noticing is that whenever I change the xTimerPeriodInTicks to a larger number, I get significantly longer time results, which really confuses me and gets in the way of proper timing. The function doesn't scale with SAMPLING_RATE and thus should give quite consistent results on each loop, yet the output in cycles with a 10ms timer period looks something like this (every 5 or 6 "loops", for some reason, it's longer):
1567, 630, 607, 624, 591, 619, 649, 1606, 607
with a 40ms timer period I get this as a typical output:
1904, 600, 1894, 616, 1928, 1928, 607, 1897, 628
As such I'm confused by the output: both cases use the same FreeRTOS task creation call, with the same priority, on the same core, so I don't see why there would be any difference. Perhaps I'm misunderstanding something basic/fundamental here, so any help would be greatly appreciated.
Update
Based on the comments by @Tarmo I have restructured the approach to use a recurring task; unfortunately the output still seems to suffer from the same problem. The code now looks like this:
#include <esp_dsp.h> // Official ESP-DSP library

float geninputarray[SAMPLING_RATE]; // stores the values of an artificial signal we can use to test the algorithm
fifo_buffer onesecwindow = fifo_buffer(SAMPLING_RATE);

esp_err_t init_input_generator() {
    dsps_tone_gen_f32(geninputarray, SAMPLING_RATE, 4, 0.04, 0); // 10Hz
    xTaskCreatePinnedToCore(
        input_counter, // the actual function to be called
        "input_generator",
        4096,
        NULL,
        tskIDLE_PRIORITY + 5,
        NULL,
        APP_CPU_NUM);
    return ESP_OK;
}

void input_counter(TimerHandle_t xTimer)
{
    while (true)
    {
        stepcounter = (stepcounter + 1) % SAMPLING_RATE;
        insert_new_data(geninputarray[stepcounter]);
        vTaskDelay(4);
    }
}

esp_err_t insert_new_data(float datapoint)
{
    onesecwindow.fifo_write(datapoint); // writes the new datapoint into the onesec window
    unsigned int start_time = dsp_get_cpu_cycle_count();
    onesecwindow.fifo_iir_filter_datapoint();
    unsigned int end_time = dsp_get_cpu_cycle_count();
    printf("%i\n", (end_time - start_time));
}
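One thing that may make the per-call numbers easier to interpret (purely a sketch of a measurement trick, not something from this thread, and note that it reprocesses the same sample only for benchmarking purposes) is to time several back-to-back runs and report the minimum cycle count, so that a single cache miss or interrupt does not dominate one measurement:

// Sketch: report the best of several consecutive runs of the filter.
// Benchmarking only; it feeds the same data point through the filter repeatedly.
unsigned int best = 0xFFFFFFFFu;
for (int i = 0; i < 8; ++i)
{
    unsigned int start_time = dsp_get_cpu_cycle_count();
    onesecwindow.fifo_iir_filter_datapoint();
    unsigned int end_time = dsp_get_cpu_cycle_count();
    unsigned int elapsed = end_time - start_time;
    if (elapsed < best)
        best = elapsed;
}
printf("%u\n", best);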

C++ call functions internally

I'm working with the following code, which gives access to low-level monitor configuration using Windows APIs:
https://github.com/scottaxcell/winddcutil/blob/main/winddcutil/winddcutil.cpp
I would like to create a new function that increases or decreases the brightness. I was able to do this using PowerShell, but since the C++ code looks somewhat easy to understand I want to have a crack at it, try my luck, and hopefully integrate it with an ambient light sensor later.
The PowerShell code I have, which works with the above executable, is as follows (it's very crude at this stage):
$cb = [int]([uint32]("0x" + ((C:\Users\Nick\WindowsScripts\winddcutil-main\x64\Release\winddcutil.exe getvcp 0 10) -join "`n").split(" ")[2]))
if ($args[0] -eq "increase") {
    if ($cb -ne 100) {
        $nb = "{0:x}" -f ($cb + 10)
        C:\Users\Nick\WindowsScripts\winddcutil-main\x64\Release\winddcutil.exe setvcp 0 10 $nb
    }
} elseif ($args[0] -eq "decrease") {
    if ($cb -ne 10) {
        $nb = "{0:x}" -f ($cb - 10)
        C:\Users\Nick\WindowsScripts\winddcutil-main\x64\Release\winddcutil.exe setvcp 0 10 $nb
    }
}
It gets the current brightness and, if the argument given is "increase" and the brightness is not already 100, adds 10; in case of "decrease" it subtracts 10. Values are converted to and from hex and decimal.
I understand that if I want to integrate this inside the C++ code directly I would have something like the following:
int increaseBrightness(std::vector<std::string> args) {
    size_t did = INT_MAX;
    did = std::stoi(args[0]);
    // 0 is monitor ID and 10 is the feature code for brightness
    // currentBrightness = getVcp("0 10")
    // calculate new value
    // setVcp("0 10 NewValue")
}
Ultimately I would like to call the executable like "winddcutil.exe increasebrightness 0" (0 being the display ID).
I can keep digging around on how to do the calculation in C++, but internally calling the functions and passing the arguments has so far turned out to be very challenging for me, and I would appreciate some help there.
You need to add the new option here (line 164 of winddcutil.cpp):
std::unordered_map<std::string, std::function<int(std::vector<std::string>)>> commands
{
    { "help", printUsage },
    { "detect", detect },
    { "capabilities", capabilities },
    { "getvcp", getVcp },
    { "setvcp", setVcp },
    { "increasebrightness", increaseBrightness } // update here
};
To get the current brightness you can't use the getVcp command, because its result is printed to stdout rather than returned via the return value. Follow what getVcp does internally to read the brightness value; use this:
DWORD currentValue;
bool success = GetVCPFeatureAndVCPFeatureReply(physicalMonitorHandle, vcpCode, NULL, &currentValue, NULL);
if (!success) {
    std::cerr << "Failed to get the vcp code value" << std::endl;
    return success;
}
Then define your increaseBrightness like this:
int increaseBrightness(std::vector<std::string> args) {
    size_t did = INT_MAX;
    did = std::stoi(args[0]); // display ID from the command line

    // physicalMonitorHandle and vcpCode are obtained the same way getVcp does it
    DWORD currentBrightness;
    bool success = GetVCPFeatureAndVCPFeatureReply(
        physicalMonitorHandle, vcpCode, NULL, &currentBrightness, NULL);
    if (!success) {
        std::cerr << "Failed to get the vcp code value" << std::endl;
        return success;
    }

    // example: + 10
    auto newValue = currentBrightness + 10;
    success = setVcp({ "0", "10", std::to_string(newValue) });
    if (success)
    {
        // your handler
    }

    // 0 is monitor ID and 10 is the feature code for brightness
    return success;
}
A test for passing the arguments:
https://godbolt.org/z/5n5Gq3d7e
Note: make sure you have increaseBrightness's declaration before the std::unordered_map<std::string, std::function<int(std::vector<std::string>)>> commands definition, to avoid a compiler complaint.
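One detail from the PowerShell version that the snippet above does not reproduce is the guard against going past 100 (or below 10). A minimal way to add it, assuming the same names as in the snippet above:

// Sketch: clamp the new value so the brightness stays within the 10..100
// range that the PowerShell script works with.
int newValue = (int)currentBrightness + 10;   // use - 10 for "decrease"
if (newValue > 100) newValue = 100;
if (newValue < 10)  newValue = 10;
success = setVcp({ "0", "10", std::to_string(newValue) });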

Building a Timeline from Lossy Time Stamps

I'm working with a product from Velodyne called the VLP-16 (the manual is available from their website with details) and I'm trying to build a timeline of the data it sends. The data it sends comes through a UDP transmission (UDP packets may appear out of order) and each packet is time-stamped with a 32-bit microsecond value. The microsecond value is synced with UTC time. This means that the timestamp will wrap around back to zero after each hour in UTC time. Since UDP packets may technically appear out of order, it is difficult to know what hour a packet may belong to.
Here's a snippet of code that generally describes the problem at hand:
struct LidarPacket
{
    uint32_t microsecond;
    /* other data */
};

struct LidarTimelineEntry
{
    uint32_t hour;
    LidarPacket packet;
};

using LidarTimeline = std::vector<LidarTimelineEntry>;

void InsertAndSort(LidarTimeline& timeline, uint32_t hour, const LidarPacket&);

void OnLidarPacket(LidarTimeline& timeline, LidarPacket& newestPacket)
{
    /* Where to insert 'newestPacket'? */
}
The simplest approach would be to assume that the packets come in order.
void OnLidarPacket(LidarTimeline& timeline, LidarPacket& newestPacket)
{
    if (timeline.empty()) {
        timeline.emplace_back(LidarTimelineEntry{0, newestPacket});
        return;
    }
    auto& lastEntry = timeline.back();
    if (newestPacket.microsecond < lastEntry.packet.microsecond) {
        InsertAndSort(timeline, lastEntry.hour + 1, newestPacket);
    } else {
        InsertAndSort(timeline, lastEntry.hour, newestPacket);
    }
}
This approach will fail if even one packet is out of order though. A slightly more robust way is to also check to see if the wrap occurs near the end of the hour.
bool NearEndOfHour(const LidarPacket& lidarPacket)
{
    const uint32_t packetDuration = 1344;  // <- approximate duration of one packet
    const uint32_t oneHour = 3600000000;   // <- one hour in microseconds
    return (lidarPacket.microsecond < packetDuration) || (lidarPacket.microsecond > (oneHour - packetDuration));
}

void OnLidarPacket(LidarTimeline& timeline, LidarPacket& newestPacket)
{
    if (timeline.empty()) {
        timeline.emplace_back(LidarTimelineEntry{0, newestPacket});
        return;
    }
    auto& lastEntry = timeline.back();
    if ((newestPacket.microsecond < lastEntry.packet.microsecond) && NearEndOfHour(lastEntry.packet)) {
        InsertAndSort(timeline, lastEntry.hour + 1, newestPacket);
    } else {
        InsertAndSort(timeline, lastEntry.hour, newestPacket);
    }
}
But it's difficult to tell if this is really going to cut it. What is the best way to build a multi-hour timeline from microsecond-stamped data coming from a UDP stream?
Doesn't have to be answered in C++
The approach I ended up going with is the following:
If the timeline is empty, add the time point to the timeline with hour=0
Otherwise, create three candidate time points: one using the hour of the last time point, one using the hour before it, and one using the hour after it.
Compute the absolute differences of all these time points with the last time point (using a 64-bit integer).
Select the hour which yields the smallest absolute difference with the last time point.
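A minimal sketch of that selection step (my own illustration of the description above; it assumes the hour field is signed, so the -1 case in the tests below can be represented, and it reuses InsertAndSort from the question):

void OnLidarPacket(LidarTimeline& timeline, LidarPacket& newestPacket)
{
    if (timeline.empty()) {
        timeline.emplace_back(LidarTimelineEntry{0, newestPacket});
        return;
    }

    const int64_t oneHour = 3600000000LL; // one hour in microseconds
    auto& lastEntry = timeline.back();

    // Absolute time of the last entry on a 64-bit time axis.
    const int64_t lastAbs = (int64_t)lastEntry.hour * oneHour + lastEntry.packet.microsecond;

    // Three candidate hours: the last entry's hour, one earlier, one later.
    int32_t bestHour = lastEntry.hour;
    int64_t bestDiff = INT64_MAX;
    for (int32_t hour = lastEntry.hour - 1; hour <= lastEntry.hour + 1; ++hour)
    {
        const int64_t candidateAbs = (int64_t)hour * oneHour + newestPacket.microsecond;
        int64_t diff = candidateAbs - lastAbs;
        if (diff < 0)
            diff = -diff;
        if (diff < bestDiff) {
            bestDiff = diff;
            bestHour = hour;
        }
    }

    InsertAndSort(timeline, bestHour, newestPacket);
}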
This approach passed the following tests:
uint32_t inputTimes[] = { 0, 750, 250 };
// gets sorted to:
uint32_t outputTimes[] = { 0, 250, 750 };
int32_t outputHours[] = { 0, 0, 0 };

uint32_t inputTimes[] = { 1500, oneHour - 500, 250 };
// gets sorted to:
uint32_t outputTimes[] = { oneHour - 500, 250, 1500 };
int32_t outputHours[] = { -1, 0, 0 };

uint32_t inputTimes[] = { oneHour - 500, 1500, 250 };
// gets sorted to:
uint32_t outputTimes[] = { oneHour - 500, 250, 1500 };
int32_t outputHours[] = { 0, 1, 1 };

Clamp framerate in Windows

I have a simple loop
LARGE_INTEGER ticks_per_second;
::QueryPerformanceFrequency(&ticks_per_second);

MSG msg = { 0 };
while (true)
{
    if (msg.message == WM_QUIT)
        exit(0);

    if (::PeekMessageW(&msg, NULL, 0U, 0U, PM_REMOVE))
    {
        ::TranslateMessage(&msg);
        ::DispatchMessageW(&msg);
        continue;
    }

    static double last_time_s = 0;
    LARGE_INTEGER cur_time_li;
    ::QueryPerformanceCounter(&cur_time_li);
    double cur_time_s = (double)cur_time_li.QuadPart / (double)ticks_per_second.QuadPart;
    double diff_s = cur_time_s - last_time_s;
    double rate_s = 1 / 30.0f;

    uint32_t slept_ms = 0;
    if (diff_s < rate_s)
    {
        slept_ms = (uint32_t)((rate_s - diff_s) * 1000.0);
        ::Sleep(slept_ms);
    }

    update();
    ::printf("updated %f %u\n", diff_s, slept_ms);
    last_time_s = cur_time_s;
}
I want update() to be called 30 times per second, but not more often.
With this code it goes wrong; in the console I get something like this:
updated 0.031747 1
updated 0.001997 31
updated 0.031912 1
updated 0.001931 31
updated 0.031442 1
updated 0.002084 31
This seems to be correct only for the first update; the second one is called too fast, and I can't understand why.
I understand that update(), PeekMessageW, etc. also take time, but even if I run a bare while (true) loop with update() commented out, it still prints similar results.
I'm using DirectX 11 with vsync turned off for rendering (rendering happens inside the update function):
g_pSwapChain->Present(0, 0);
How do I fix the code so that update() is reliably called 30 times per second?
I don't think casting to double is a good idea. I would run something like this:
    static LARGE_INTEGER last_time_s = { 0 };
    LARGE_INTEGER cur_time_li;
    LARGE_INTEGER time_diff_microsec;

    ::QueryPerformanceCounter(&cur_time_li);
    time_diff_microsec.QuadPart = cur_time_li.QuadPart - last_time_s.QuadPart;
    // To avoid losing precision, convert to microseconds *before* dividing by ticks-per-second.
    time_diff_microsec.QuadPart *= 1000000;
    time_diff_microsec.QuadPart /= ticks_per_second.QuadPart;

    double rate_s = 1 / 30.0f;
    uint32_t slept_ms = 0;
    if (time_diff_microsec.QuadPart >= rate_s) // if (diff_s < rate_s)
    {
        // slept_ms = (uint32_t)(rate_s - time_diff_microsec.LowPart); // * 1000.0);
        // ::Sleep(slept_ms);
        //}
        //update();
        ::printf("updated %lld %u\n", time_diff_microsec.QuadPart, slept_ms);
    }
    last_time_s.QuadPart = time_diff_microsec.QuadPart / 1000000;
} // end of the while (true) loop from the question
Just a brief "sketch"; I have not verified that the calculations are correct, though.
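For completeness, here is a minimal sketch (my own illustration, not from either post above) of the usual fix to the original loop: re-query the performance counter after sleeping, so that last_time_s reflects the moment update() actually runs rather than the pre-sleep timestamp. Note that Sleep() only has roughly 15 ms resolution by default unless the system timer period is lowered with timeBeginPeriod(1).

// Sketch: the key change is re-reading the counter after Sleep(), so the
// next frame's diff does not also include the time spent sleeping this frame.
static double last_time_s = 0;

LARGE_INTEGER cur_time_li;
::QueryPerformanceCounter(&cur_time_li);
double cur_time_s = (double)cur_time_li.QuadPart / (double)ticks_per_second.QuadPart;

const double rate_s = 1.0 / 30.0;
double diff_s = cur_time_s - last_time_s;
uint32_t slept_ms = 0;
if (diff_s < rate_s)
{
    slept_ms = (uint32_t)((rate_s - diff_s) * 1000.0);
    ::Sleep(slept_ms);
    ::QueryPerformanceCounter(&cur_time_li); // sample the time again, after the sleep
    cur_time_s = (double)cur_time_li.QuadPart / (double)ticks_per_second.QuadPart;
}

update();
::printf("updated %f %u\n", diff_s, slept_ms);
last_time_s = cur_time_s; // the next frame measures from the post-sleep time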

SDL1 -> SDL2 resolution list building with a custom screen mode class

My engine was recently converted to run with SDL2 for input and video. However, I am having a hard time getting the resolution mode list building to work correctly. I did study the Migration Guide for SDL2 and heavily researched links, but after several failed attempts I am not sure how to go about it.
So, for starters, I have a file, called i_video.cc, that handles SDL2 video code. 99% of the engine runs in OpenGL, so SDL is only there to initialize the window and set any variables.
So, here was the old way of grabbing resolutions into our screen mode class. For reference, the screen mode class is defined in r_modes.cc.
With that said, here is the old SDL1-based code that builds the screen mode list, which I cannot, for the life of me, get running under SDL2. This code starts at line 190 in i_video.cc:
// -DS- 2005/06/27 Detect SDL Resolutions
const SDL_VideoInfo *info = SDL_GetVideoInfo();
SDL_Rect **modes = SDL_ListModes(info->vfmt,
    SDL_OPENGL | SDL_DOUBLEBUF | SDL_FULLSCREEN);

if (modes && modes != (SDL_Rect **)-1)
{
    for (; *modes; modes++)
    {
        scrmode_c test_mode;
        test_mode.width  = (*modes)->w;
        test_mode.height = (*modes)->h;
        test_mode.depth  = info->vfmt->BitsPerPixel; // HMMMM ???
        test_mode.full   = true;

        if ((test_mode.width & 15) != 0)
            continue;

        if (test_mode.depth == 15 || test_mode.depth == 16 ||
            test_mode.depth == 24 || test_mode.depth == 32)
        {
            R_AddResolution(&test_mode);
        }
    }
}

// -ACB- 2000/03/16 Test for possible windowed resolutions
for (int full = 0; full <= 1; full++)
{
    for (int depth = 16; depth <= 32; depth = depth + 16)
    {
        for (int i = 0; possible_modes[i].w != -1; i++)
        {
            scrmode_c mode;
            mode.width  = possible_modes[i].w;
            mode.height = possible_modes[i].h;
            mode.depth  = depth;
            mode.full   = full;

            int got_depth = SDL_VideoModeOK(mode.width, mode.height,
                mode.depth, SDL_OPENGL | SDL_DOUBLEBUF |
                (mode.full ? SDL_FULLSCREEN : 0));

            if (R_DepthIsEquivalent(got_depth, mode.depth))
            {
                R_AddResolution(&mode);
            }
        }
    }
}
It is commented out, and you can see the SDL2 code above it that calls SDL_CreateWindow. The video is just fine in-game, but without resolution building we cannot change the screen resolution without passing command-line arguments before the program loads. I wish they had left SOME kind of compatibility layer, because it seems SDL2 has a slight learning curve over the way I've always handled this under SDL1.
I know that ListModes and VideoInfo no longer exist, and I've tried replacing them with equivalent SDL2 functions, such as GetDisplayModes, but the code just doesn't work correctly. I am not sure how I'm supposed to do this, or if r_modes.cc just needs to be completely refactored, but all I need it to do is grab a list of video modes to populate my scrmode_c class (in r_modes.cc).
When I try to replace everything with SDL2, I get an invalid cast from SDL_Rect* to SDL_Rect**, so maybe I am just doing this all wrong. I've spent several months trying to get it working, and it just doesn't want to. I don't care much about setting bits-per-pixel, as modern machines can just default to 24 now and we don't have any reason to set it to 16 or 8 anymore (nowadays, everyone has an OpenGL card that can go above 16-bit BPP) ;)
Any advice, help...anything at this point would be greatly appreciated =)
Thank you!
-Coraline
Use a combination of SDL_GetNumDisplayModes and SDL_GetDisplayMode, then push these back into a vector of SDL_DisplayMode.
std::vector<SDL_DisplayMode> mResolutions;

int display_count = SDL_GetNumVideoDisplays();
SDL_Log("Number of displays: %i", display_count);

for (int display_index = 0; display_index < display_count; display_index++)
{
    SDL_Log("Display %i:", display_index);

    int modes_count = SDL_GetNumDisplayModes(display_index);
    for (int mode_index = 0; mode_index < modes_count; mode_index++)
    {
        SDL_DisplayMode mode = { SDL_PIXELFORMAT_UNKNOWN, 0, 0, 0, 0 };
        if (SDL_GetDisplayMode(display_index, mode_index, &mode) == 0)
        {
            SDL_Log("  %i bpp\t%i x %i @ %iHz",
                SDL_BITSPERPIXEL(mode.format), mode.w, mode.h, mode.refresh_rate);
            mResolutions.push_back(mode);
        }
    }
}
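To connect that back to the engine's scrmode_c class from the question, a sketch along these lines (my own illustration, keeping the question's (width & 15) filter and depth checks, and assuming display 0) could replace the old SDL_ListModes loop:

// Sketch: populate scrmode_c / R_AddResolution from the SDL2 display modes,
// mirroring what the old SDL1 fullscreen loop did.
int modes_count = SDL_GetNumDisplayModes(0);
for (int i = 0; i < modes_count; ++i)
{
    SDL_DisplayMode mode = { SDL_PIXELFORMAT_UNKNOWN, 0, 0, 0, 0 };
    if (SDL_GetDisplayMode(0, i, &mode) != 0)
        continue;

    scrmode_c test_mode;
    test_mode.width  = mode.w;
    test_mode.height = mode.h;
    test_mode.depth  = SDL_BITSPERPIXEL(mode.format); // typically 24 or 32 these days
    test_mode.full   = true;

    if ((test_mode.width & 15) != 0)
        continue;

    if (test_mode.depth == 15 || test_mode.depth == 16 ||
        test_mode.depth == 24 || test_mode.depth == 32)
    {
        R_AddResolution(&test_mode);
    }
}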