Calculating line offset(s) for rendering text (custom edit control) - c++

I followed the second answer of Custom edit control win32 to create a custom edit control, but the problem is that I render every letter separately, so I can have different text colors. For every render call, I have to calculate the text offset to start rendering from the top of the screen (0,0), so I don't have to render the whole text. And if I have a 200 kB file and scroll to the bottom of the file (and do some editing there), there is just too much lag, since I have to go through all that text and find every '\n' (which indicates a line break) for the offset.
Render function:
int Screen_X = 0, Screen_Y = 0;
size_t Text_YOffset = Calc_TextYPos(m_Screen_YOff); // Screen pos (0,0) to text
size_t Text_Size = m_Text.size();
COLORREF Text_ColorOld = 0;
for (size_t pos = Text_YOffset; pos < Text_Size; pos++) {
    if (m_Text.at(pos) == L'\n') {
        Screen_Y++; Screen_X = 0;
        continue;
    }
    if (Screen_X < m_Screen_XOff) { Screen_X++; continue; }
    if (m_Screen_MaxX < Screen_X) continue;
    if (m_Screen_MaxY < Screen_Y) break;
    if (m_Text_Color.at(pos) != Text_ColorOld) {
        Text_ColorOld = m_Text_Color.at(pos);
        if (SetTextColor(hDC, Text_ColorOld) == CLR_INVALID) {
            MB_ERR("'SetTextColor' Failed!");
            PostQuitMessage(-1);
        }
    }
    CHECK_ERR(TextOut(hDC, (Screen_X - m_Screen_XOff) * m_Char_Width, Screen_Y * m_Char_Height, &m_Text.at(pos), 1), ERR_MSG_TEXT_OUT);
    Screen_X++;
}
Calc_TextYPos:
size_t Edit_Control::Calc_TextYPos(int Screen_y) {
    if (Screen_y == 0) return 0;
    size_t Offset = 0;
    size_t Text_Size = m_Text.size();
    for (size_t pos = 0; pos < Text_Size; pos++) {
        if (m_Text.at(pos) == L'\n' && Screen_y != 0) {
            Screen_y--;
            Offset = pos + 1;
        }
        if (Screen_y == 0) return Offset;
    }
    return Offset;
}
Am I taking the wrong path here and should I use a different algorithm for rendering text (in different colors), or if not, how can I optimize this code? I like this approach since it makes the caret and text selection really easy.
I also came across What is the fastest way to draw formatted text in the Win32 API? but it doesn't answer my question. It only covers the rendering function (ExtTextOut), which I don't need. I need a fast way of calculating line offsets in big strings.
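One common fix is to stop rescanning the text on every frame and instead keep an index of line-start offsets that is updated incrementally on each edit. A minimal sketch of the idea (the function names here are illustrative, not from the code above):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Build an index: line_starts[i] is the offset of the first character of line i.
// Built once; on an edit, only the entries after the edited line need adjusting.
std::vector<size_t> build_line_index(const std::wstring& text) {
    std::vector<size_t> line_starts{0};          // line 0 always starts at offset 0
    for (size_t pos = 0; pos < text.size(); ++pos)
        if (text[pos] == L'\n')
            line_starts.push_back(pos + 1);      // next line starts after the '\n'
    return line_starts;
}

// O(1) replacement for Calc_TextYPos once the index exists.
size_t line_offset(const std::vector<size_t>& line_starts, size_t line) {
    return line < line_starts.size() ? line_starts[line] : line_starts.back();
}
```

With this, Calc_TextYPos becomes a single array lookup; an insertion or deletion only shifts the entries after the edited line (and adds or removes entries for typed or deleted '\n' characters), so editing at the bottom of a 200 kB file no longer walks the whole string.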

Related

Drag and Drop Item list not working properly on ImGUI

I'm using ImGui and I want to implement a layer menu for the images; to move them I'm using Drag to reorder items in a vector.
Sometimes it works just fine, but at other times an image jumps from its current position to a random one.
for (int i = 0; i < this->Images->size(); i++) {
    ImGui::Image((void*)(intptr_t)this->Images->at(i).texture, ImVec2(100 * temp_percentage, 100 * temp_percentage));
    ImGui::SameLine();
    ImGui::Selectable(this->Images->at(i).name.c_str());
    if (ImGui::IsItemActive() && !ImGui::IsItemHovered())
    {
        int n_next = i + (ImGui::GetMouseDragDelta(0).y < 0.f ? -1 : 1);
        if (n_next >= 0 && n_next < this->Images->size())
        {
            std::swap(this->Images->at(i), this->Images->at(n_next));
            *this->CurrentImage = this->Images->front();
            centerImage();
            ImGui::ResetMouseDragDelta();
        }
    }
    ImGui::Separator();
}
The problem lies at !ImGui::IsItemHovered(): there is a small spacing between the lines (cell, selectable, ...), so when the mouse hovers over that spacing, the item isn't hovered but is still active, and the swap and mouse-delta reset therefore execute multiple times, sending the item to the top or bottom of the list. The same happens when the mouse goes out of the table/window bounds.
To make the problem more visible, you can enlarge the spacing with ImGui::GetStyle().ItemSpacing.y = 50.f;.
To actually fix the problem, you'll have to calculate the item index from the mouse position. Here is one way to do it; it's not perfect, but it works.
ImGuiStyle& style = ImGui::GetStyle();
ImVec2 windowPosition = ImGui::GetWindowPos();
ImVec2 cursorPosition = ImGui::GetCursorPos();
// this is not a pixel-perfect position
// you can try to make it more accurate by adding some offset
ImVec2 itemPosition(
    windowPosition.x + cursorPosition.x,
    windowPosition.y + cursorPosition.y - style.ItemSpacing.y
);
for (int i = 0; i < this->Images->size(); i++) {
    ImGui::Image((void*)(intptr_t)this->Images->at(i).texture, ImVec2(100 * temp_percentage, 100 * temp_percentage));
    ImGui::SameLine();
    ImGui::Selectable(this->Images->at(i).name.c_str());
    if (ImGui::IsItemActive() && ImGui::IsMouseDragging(0))
    {
        // itemHeight is the height of one row (image plus spacing), computed elsewhere
        int n_next = floorf((ImGui::GetMousePos().y - itemPosition.y) / itemHeight);
        if (n_next != i && n_next >= 0 && n_next < this->Images->size())
        {
            std::swap(this->Images->at(i), this->Images->at(n_next));
            *this->CurrentImage = this->Images->front();
            centerImage();
        }
    }
    ImGui::Separator();
}
There is also another problem in your code: if there are multiple items with the same name, ImGui::IsItemActive() will return true for all of them when one is active.
You can fix this easily by appending ##some_unique_string to the name; for example, ImGui::Selectable("Image##image_1") will display just Image.
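To make that concrete, here is a tiny sketch of a label builder (the helper name is mine, not from the code above): everything after "##" is hashed into the ImGui ID but never rendered, so appending the item index keeps IDs unique even for identical names.

```cpp
#include <cassert>
#include <string>

// Build a visible label with an invisible, unique ID suffix.
// ImGui hashes the part after "##" into the ID but does not display it.
std::string unique_label(const std::string& name, int index) {
    return name + "##" + std::to_string(index);
}
```

It would be used as ImGui::Selectable(unique_label(this->Images->at(i).name, i).c_str()); an alternative is wrapping each row in ImGui::PushID(i) / ImGui::PopID().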

Am I checking for a white pixel correctly?

Cross-posting, as this may be more of a C++ question than a robotics one.
I am currently going through all the pixels in an image to determine which are white. I then have to decide where to drive the bot. I am also using sensor_msgs/Image.msg that I get from the /camera/rgb/image_raw topic.
However, I can't seem to locate any white pixel with this code, even though the RGBA values I set in my model in Gazebo all have value 1, as shown in the image below the code.
I logged all my values (more than once) with ROS_INFO_STREAM, but no values are 255, let alone 3 consecutive ones.
void process_image_callback(const sensor_msgs::Image img)
{
    const int white_pixel = 255;
    const int image_slice_width = img.step / 3;
    int j = 0;
    bool found = false;
    for (int i = 0; not found and i < img.height; i++)
    {
        for (j; j < img.step-3; j += 3)
        {
            if (img.data[i*img.step + j] == white_pixel)
            {
                ROS_INFO_STREAM("img.data[i*img.step + (j + 0)]" + std::to_string(img.data[i*img.step + (j + 0)]));
                ROS_INFO_STREAM("img.data[i*img.step + (j + 1)]" + std::to_string(img.data[i*img.step + (j + 1)]));
                ROS_INFO_STREAM("img.data[i*img.step + (j + 2)]" + std::to_string(img.data[i*img.step + (j + 2)]));
            }
            // img.data only has one index
            if (img.data[i*img.step + j] == white_pixel and
                img.data[i*img.step + (j + 1)] == white_pixel and
                img.data[i*img.step + (j + 2)] == white_pixel)
            {
                found = true;
                break;
            }
        }
        ROS_INFO_STREAM("End of j loop");
    }
    if (found)
    {
        // go left, forward or right
    }
    else
    {
        // no white pixel seen so stop the bot
    }
}
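As a side note, the inner loop above declares j outside the loops and never resets it, so every row after the first resumes where the previous row stopped. A self-contained sketch of a row-by-row RGB8 scan (a plain std::vector stands in for img.data; the field names mirror sensor_msgs::Image, and step may exceed width*3 when rows are padded):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Scan a row-major RGB8 buffer for the first fully white pixel.
// 'step' is the number of bytes per row (may include padding).
bool has_white_pixel(const std::vector<uint8_t>& data,
                     int height, int width, int step) {
    for (int i = 0; i < height; ++i) {
        for (int j = 0; j + 2 < width * 3; j += 3) {  // j is reset every row
            const uint8_t* px = &data[i * step + j];
            if (px[0] == 255 && px[1] == 255 && px[2] == 255)
                return true;
        }
    }
    return false;
}
```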
I'd suggest making your own custom structures such as these. The code below is only pseudo-code to illustrate the overall concept.
Having your own custom classes and structures allows you to parse various file types of different image formats into a data-structure format that is designed to work with your application.
Here, you would have a custom color structure that is templated so that the color components can be either integral or floating-point types. Then an external function that takes a Color object checks whether it meets the criteria of being a white Color object: if the r, g, b color channels are indeed 255 (or 1.0) and the alpha channel is not 0, the pixel is white!
I also provided versions of the function template that work with both integral and floating-point Color objects. Again, the syntax isn't perfect, as it is only pseudo-code, but implementing these classes and structures should be trivial.
If the color encoding is not "RGBA", then you will have to figure out the actual encoding of the color channels and convert it to an "RGBA" format! Once this is done, we can apply this to any pixel data.
template<typename T>
struct Color {
    T r;
    T g;
    T b;
    T a;

    Color() : r{0}, g{0}, b{0}, a{1}
    {}

    Color(T red, T green, T blue, T alpha = 1) :
        r{red}, g{green}, b{blue}, a{alpha}
    {}
};

template<typename T, std::enable_if_t<std::is_integral<T>::value, int> = 0>
bool isWhite(const Color<T>& color) {
    return ( (color.r == 255) && (color.g == 255) &&
             (color.b == 255) && (color.a != 0) );
}

template<typename T, std::enable_if_t<std::is_floating_point<T>::value, int> = 0>
bool isWhite(const Color<T>& color) {
    return ( (color.r == 1.0) && (color.g == 1.0) &&
             (color.b == 1.0) && (color.a != 0.0) );
}
class Image {
private:
    std::string filename_;
    std::string encoding_;
    uint32_t width_;
    uint32_t height_;
    uint32_t step_;
    uint8_t bigEndian_;
    std::vector<Color<uint8_t>> pixelData_; // Where Color structures are populated
                                            // from the file's `uint8[] data` matrix.
public:
    Image(const std::string& filename) {
        // Open the file, parse its contents from the header information,
        // and populate your internal data structure with the needed
        // information from the file.
    }
    uint32_t width() const { return width_; }
    uint32_t height() const { return height_; }
    uint32_t stride() const { return step_; }
    std::string getEncoding() const { return encoding_; }
    std::string getFilename() const { return filename_; }
    std::vector<Color<uint8_t>> getPixelData() const { return pixelData_; }
};
Then somewhere else in your code where you are processing the information about the pixel data from the image.
void processImage(const Image& image) {
    for (const auto& pixel : image.getPixelData()) {
        if (isWhite(pixel)) {
            // Do something
        } else {
            // Do something different
        }
    }
}
This should make it easier to work with, since the Image object is of your own design. The hardest part would be writing the file loader/parser to obtain all of the information from their file format and convert it to your own.
I've done this quite a bit, since I work with 3D graphics using DirectX, OpenGL, and now Vulkan. In the beginning, I never relied on 3rd-party libraries to load image or texture files; I originally wrote my own loaders and parsers to accept TGAs, BMPs, PNGs, etc., and I had a single Texture or Image class that could be created from any of those file formats.
This might help you out in the long run. What if you want to extend your application to use different cameras? Then all you would have to do is write a different file loader for the next camera type, parse its data structures, and convert them into your own custom data structure and format. You end up with a plug-and-play system, so to speak: you can easily extend the types your application supports.

tbb increment number of vector element without using mutex

Currently I am working on parallelizing an image-processing algorithm that extracts edges from a given image. I only recently started parallelizing code.
Anyway, a part of the program requires me to compute the histogram of the image and count the number of occurring pixels from 1 up to the maximum gradient intensity.
I have implemented it as follows:
tbb::concurrent_vector<double> histogram(32768);
tbb::parallel_for(tbb::blocked_range<size_t>(1, width - 1),
    [&](const tbb::blocked_range<size_t>& r)
    {
        unsigned int idx;
        for (size_t w = r.begin(); w != r.end(); ++w) // 1 to (width - 1)
        {
            for (size_t h = 1; h < height - 1; ++h)
            {
                idx = h * width + w;
                // DO SOME STUFF BEFORE
                // Get max gradient intensity
                if (pgImg[idx] > maxGradIntensity)
                {
                    maxGradIntensity = pgImg[idx];
                }
                // Get histogram information
                if (pgImg[idx] > 0)
                {
                    tbb::mutex::scoped_lock sync(locked);
                    ++histogram[(int)pgImg[idx]];
                    ++totalGradPixels;
                }
            }
        }
    });
histogram.resize(maxGradIntensity);
The part that becomes tricky for me is the following:
    if (pgImg[idx] > 0)
    {
        tbb::mutex::scoped_lock sync(locked);
        ++histogram[(int)pgImg[idx]];
        ++totalGradPixels;
    }
How can I avoid using tbb::mutex? I had no luck with setting the vector to tbb::atomic; maybe I did something wrong there. Any help on this topic would be appreciated.
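One lock-free alternative to the mutex is a vector of atomic counters: each bin increments independently, so no critical section is needed. The same std::vector<std::atomic<int>> works inside tbb::parallel_for; the sketch below uses plain std::thread only so it is self-contained (names are illustrative, and the maxGradIntensity update would similarly need an atomic compare-and-swap loop or a parallel reduction):

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

// Histogram with one atomic counter per bin: concurrent increments on any
// bin are safe without a mutex. Assumes all pixel values are < bins.
std::vector<int> parallel_histogram(const std::vector<uint16_t>& pgImg, int bins) {
    std::vector<std::atomic<int>> histogram(bins);
    for (auto& h : histogram) h.store(0);           // explicit zeroing, portable pre-C++20

    auto worker = [&](size_t begin, size_t end) {
        for (size_t i = begin; i < end; ++i)
            if (pgImg[i] > 0)
                ++histogram[pgImg[i]];              // lock-free increment
    };
    std::thread t1(worker, 0, pgImg.size() / 2);    // each thread takes half the image
    std::thread t2(worker, pgImg.size() / 2, pgImg.size());
    t1.join();
    t2.join();

    return std::vector<int>(histogram.begin(), histogram.end());
}
```

totalGradPixels can be another std::atomic<int>, or recovered afterwards by summing the bins; with TBB, per-thread local histograms merged at the end (e.g. via tbb::parallel_reduce or tbb::enumerable_thread_specific) avoid even the atomic contention.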

DirectShow ISampleGrabber: samples are upside-down and color channels reverse

I have to use MS DirectShow to capture video frames from a camera (I just want the raw pixel data).
I was able to build the graph/filter network (capture device filter and ISampleGrabber) and implement the callback (ISampleGrabberCB). I receive samples of the appropriate size.
However, they are always upside down (flipped vertically, that is, not rotated) and the color channels are in BGR order (not RGB).
I tried setting the biHeight field in the BITMAPINFOHEADER to both positive and negative values, but it doesn't have any effect. According to the MSDN documentation, ISampleGrabber::SetMediaType() ignores the format block for video data anyway.
Here is what I see (recorded with a different camera, not DS), and what DirectShow's ISampleGrabber gives me; the "RGB" text is actually in red, green and blue respectively:
A sample of the code I'm using, slightly simplified:
// Setting the media type...
AM_MEDIA_TYPE* media_type = 0;
this->ds.device_streamconfig->GetFormat(&media_type); // The IAMStreamConfig of the capture device
// Find the BMI header in the media type struct
BITMAPINFOHEADER* bmi_header;
if (media_type->formattype == FORMAT_VideoInfo) {
    bmi_header = &((VIDEOINFOHEADER*)media_type->pbFormat)->bmiHeader;
} else if (media_type->formattype == FORMAT_VideoInfo2) {
    bmi_header = &((VIDEOINFOHEADER2*)media_type->pbFormat)->bmiHeader;
} else {
    return false;
}
// Apply changes
media_type->subtype = MEDIASUBTYPE_RGB24;
bmi_header->biWidth = width;
bmi_header->biHeight = height;
// Set format to video device
this->ds.device_streamconfig->SetFormat(media_type);
// Set format for sample grabber
// bmi_header->biHeight = -(height); // tried this for either and both interfaces, no effect
this->ds.sample_grabber->SetMediaType(media_type);
// Connect filter pins
IPin* out_pin = getFilterPin(this->ds.device_filter, OUT, 0);        // IBaseFilter interface for the capture device
IPin* in_pin  = getFilterPin(this->ds.sample_grabber_filter, IN, 0); // IBaseFilter interface for the sample grabber filter
out_pin->Connect(in_pin, media_type);
// Start capturing by callback
this->ds.sample_grabber->SetBufferSamples(false);
this->ds.sample_grabber->SetOneShot(false);
this->ds.sample_grabber->SetCallback(this, 1);
// start recording
this->ds.media_control->Run(); // IMediaControl interface
I'm checking return types for every function and don't get any errors.
I'm thankful for any hint or idea.
Things I already tried:
Setting the biHeight field to a negative value for either the capture device filter or the sample grabber or for both or for neither - doesn't have any effect.
Using IGraphBuilder to connect the pins - same problem.
Connecting the pins before changing the media type - same problem.
Checking if the media type was actually applied by the filter by querying it again - but it apparently is applied or at least stored.
Interpreting the image as totally byte-reversed (last byte first, first byte last) - then it would be flipped horizontally.
Checking if it's a problem with the video camera - when I test it with VLC (DirectShow capture) it looks normal.
My quick hack for this:
void Camera::OutputCallback(unsigned char* data, int len, void* instance_)
{
    Camera* instance = reinterpret_cast<Camera*>(instance_);
    int j = 0;
    for (int i = len - 4; i >= 0; i -= 4)
    {
        instance->buffer[j]     = data[i];
        instance->buffer[j + 1] = data[i + 1];
        instance->buffer[j + 2] = data[i + 2];
        instance->buffer[j + 3] = data[i + 3];
        j += 4;
    }
    Transport::RTPPacket packet;
    packet.payload = instance->buffer;
    packet.payloadSize = len;
    instance->receiver->Send(packet);
}
This is correct for the RGB32 color space; for other color spaces the code needs to be adjusted.
I noticed that when using the I420 color space, the flipping disappears.
In addition, most current codecs (e.g. VP8) use the I420 color space as their raw I/O format.
I wrote a simple frame-mirroring function for the I420 color space.
void Camera::OutputCallback(unsigned char* data, int len, uint32_t timestamp, void* instance_)
{
    Camera* instance = reinterpret_cast<Camera*>(instance_);
    Transport::RTPPacket packet;
    packet.rtpHeader.ts = timestamp;
    packet.payload = data;
    packet.payloadSize = len;
    if (instance->mirror)
    {
        Video::ResolutionValues rv = Video::GetValues(instance->resolution);
        int k = 0;
        // Y (luma) values
        for (int i = 0; i != rv.height; ++i)
        {
            for (int j = rv.width; j != 0; --j)
            {
                int l = ((rv.width * i) + j);
                instance->buffer[k++] = data[l];
            }
        }
        // U values
        for (int i = 0; i != rv.height / 2; ++i)
        {
            for (int j = (rv.width / 2); j != 0; --j)
            {
                int l = (((rv.width / 2) * i) + j) + rv.height * rv.width;
                instance->buffer[k++] = data[l];
            }
        }
        // V values
        for (int i = 0; i != rv.height / 2; ++i)
        {
            for (int j = (rv.width / 2); j != 0; --j)
            {
                int l = (((rv.width / 2) * i) + j) + rv.height * rv.width + (rv.width / 2) * (rv.height / 2);
                if (l == len)
                {
                    instance->buffer[k++] = 0;
                }
                else
                {
                    instance->buffer[k++] = data[l];
                }
            }
        }
        packet.payload = instance->buffer;
    }
    instance->receiver->Send(packet);
}
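For reference, with RGB24 the sample grabber typically delivers a standard bottom-up BGR DIB, so both symptoms can be fixed in one pass by walking the rows in reverse and swapping channels. A self-contained sketch (it assumes no row padding, which holds when width * 3 is a multiple of 4):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Convert a bottom-up BGR24 frame (no row padding) to top-down RGB24.
std::vector<uint8_t> bgr_bottomup_to_rgb_topdown(
        const std::vector<uint8_t>& src, int width, int height) {
    const int stride = width * 3;
    std::vector<uint8_t> dst(src.size());
    for (int y = 0; y < height; ++y) {
        const uint8_t* in = &src[(height - 1 - y) * stride];  // read rows bottom-up
        uint8_t* out = &dst[y * stride];
        for (int x = 0; x < width; ++x) {
            out[x * 3 + 0] = in[x * 3 + 2];  // R <- B slot
            out[x * 3 + 1] = in[x * 3 + 1];  // G
            out[x * 3 + 2] = in[x * 3 + 0];  // B <- R slot
        }
    }
    return dst;
}
```

Unlike the 4-byte-reversal hack above, this flips only vertically, so the image is not mirrored horizontally as a side effect.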

Allegro 5 drawing a bitmap over a primitive

I've recently tried making an inventory system in Allegro 5, where I draw a grid of 20x20-pixel squares and drag-drop items around. The problem is that I can see the item sprite going under the grid that I drew, which is an unwanted effect. Here's my code:
if (draw)
{
    draw = false;
    al_draw_bitmap(image, item.posx, item.posy, 0);
    if (mouseKey)
    {
        grab = true;
        item.posx = mouse.posx - (item.boundx - 5);
        item.posy = mouse.posy - (item.boundy - 5);
    }
    else if (mouseKey == false && grab == true)
    {
        for (int i = 0; i < mouse.posx; i += 20)
        {
            if (i < mouse.posx)
                item.posx = i;
        }
        for (int j = 0; j < mouse.posy; j += 20)
        {
            if (j < mouse.posy)
            {
                item.posy = j;
            }
        }
        grab = false;
    }
    for (int i = 0; i <= width; i += 20)
    {
        al_draw_line(i, 0, i, height, al_map_rgb(0, 0, 0), 1);
        al_draw_line(0, i, width, i, al_map_rgb(0, 0, 0), 1);
    }
    al_flip_display();
    al_clear_to_color(al_map_rgb(40, 40, 40));
}
(I know it's terribly written and unoptimized, but I wrote it in about 10 minutes simply as a test.)
How can I make it so the grid lines are not drawn over the item sprite? Here's an example of my problem, in case I was too vague:
I'm using the Code::Blocks IDE on Windows XP.
Unless you fiddle with OpenGL settings, the things you draw last will always end up on top. So in this case, simply move al_draw_bitmap(image, item.posx, item.posy, 0); to directly above al_flip_display().
Note that you will have some problems because you are manipulating item.posx and item.posy in that section, so you'd first have to cache the results:
int x = item.posx;
int y = item.posy;
// ...
al_draw_bitmap(image, x, y, 0);
al_flip_display();
However, that's just a bandaid over the larger problem: you shouldn't be changing anything inside your drawing block. The entire if/else block should be elsewhere. i.e.:
if (event timer is a game tick)
{
    do all logic stuff
    draw = true
}
if (draw)
{
    do all drawing stuff
    draw = false
}
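As a final aside, the question's snap-to-grid loops can be replaced by a single integer division. A small sketch, using the 20-pixel cell size from the question (note the original loops pick the multiple strictly below the mouse position, while this picks the multiple at or below it):

```cpp
#include <cassert>

// Snap a coordinate down to the nearest multiple of the grid cell size.
int snap_to_grid(int pos, int cell = 20) {
    return (pos / cell) * cell;
}
```

So dropping an item would become item.posx = snap_to_grid(mouse.posx); item.posy = snap_to_grid(mouse.posy); with no loops at all.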