I have an enumeration of just under 32 absolute rectangle sizes, and given a pair of dimensions I need to find the best approximation among my enumeration.
Is there any better (i.e. more readable and maintainable) way than the spaghetti code I am formulating out of lots of nested ifs and elses?
At the moment I have just:
enum imgOptsScale {
//Some relative scales
w005h005 = 0x8,
w010h010 = 0x9,
w020h020 = 0xA,
w040h040 = 0xB,
w070h070 = 0xC,
w100h100 = 0xD,
w150h150 = 0xE,
w200h200 = 0xF,
w320h320 = 0x10,
w450h450 = 0x11,
w200h010 = 0x12,
w200h020 = 0x13,
w200h070 = 0x14,
w010h200 = 0x15,
w020h200 = 0x16,
w070h200 = 0x17
};
imgOptsScale getClosestSizeTo(int width, int height);
and I thought I'd ask for help before I got too much further into coding it up. I should emphasise a bias away from elaborate libraries: though I am more interested in algorithms than containers, this is supposed to run on a resource-constrained system.
I think I'd approach this with a few arrays of structs, one for horizontal measures and one for vertical measures.
Read through the arrays to find the next larger size, and return the corresponding key. Build the final box measure from the two keys. (Since 32 values only allow 5 bits, this is probably not ideal: you'd want 2.5 bits for the horizontal and 2.5 bits for the vertical, but my simple approach here requires 6 bits, 3 for horizontal and 3 for vertical. You can remove half the elements from one of the lists (and adjust the << 3 accordingly) if you're fine with one of the dimensions having fewer degrees of freedom. If you want both dimensions to be better represented, this will probably require enough re-working that this approach might not be suitable.)
Untested pseudo-code:
struct dimen {
    int x;
    int key;
};
struct dimen horizontal[] = { { .x = 10, .key = 0 },
{ .x = 20, .key = 1 },
{ .x = 50, .key = 2 },
{ .x = 90, .key = 3 },
{ .x = 120, .key = 4 },
{ .x = 200, .key = 5 },
{ .x = 300, .key = 6 },
{ .x = 10000, .key = 7 }};
struct dimen vertical[] = { { .x = 10, .key = 0 },
{ .x = 20, .key = 1 },
{ .x = 50, .key = 2 },
{ .x = 90, .key = 3 },
{ .x = 120, .key = 4 },
{ .x = 200, .key = 5 },
{ .x = 300, .key = 6 },
{ .x = 10000, .key = 7 }};
int find_just_larger(struct dimen* d, int size); /* forward declaration */

/* returns 0-63 as written */
int getClosestSizeTo(int width, int height) {
    int horizontal_key = find_just_larger(horizontal, width);
    int vertical_key = find_just_larger(vertical, height);
    return (horizontal_key << 3) | vertical_key;
}
int find_just_larger(struct dimen* d, int size) {
    int ret = d->key;
    while (d->x < size) {
        d++;
        ret = d->key;
    }
    return ret;
}
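For instance, a quick sanity check of the packing (hypothetical values): width 60 snaps up to the 90 bucket (key 3) and height 100 snaps up to the 120 bucket (key 4), so
int key = getClosestSizeTo(60, 100); /* (3 << 3) | 4 == 28 */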
Yes ... place your 32 different sizes in a pre-built binary search tree, and then recursively search through the tree for the "best" size. Basically, you would stop your search when the left-child rectangle of the current node is smaller than your input rectangle while the current node's rectangle is larger than the input rectangle. You would then return whichever of those two pre-defined rectangles is "closest" to your input rectangle.
A nice bonus of the clean code the recursive search creates is that it is also logarithmic rather than linear in search time.
BTW, you will want to randomize the order in which you insert the initial pre-defined rectangle values into the binary search tree; otherwise you will end up with a degenerate tree that looks like a linked list, and you won't get logarithmic search time, since the height of the tree will be the number of nodes rather than logarithmic in the number of nodes. (One way to avoid the shuffle entirely is sketched just below.)
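Alternatively, if you control the construction, you can build a balanced tree directly and skip the randomization. A minimal sketch, assuming a node type like the rec_bstree used further down and a pre-sorted array of areas:
struct rec_bstree {
    int area;
    rec_bstree* left_child;
    rec_bstree* right_child;
};

// Insert the median of each sorted sub-range first; the resulting tree is
// balanced by construction.
rec_bstree* build_balanced(const int* areas, int lo, int hi) // range [lo, hi)
{
    if (lo >= hi)
        return NULL;
    int mid = lo + (hi - lo) / 2;
    rec_bstree* node = new rec_bstree();
    node->area = areas[mid];
    node->left_child = build_balanced(areas, lo, mid);
    node->right_child = build_balanced(areas, mid + 1, hi);
    return node;
}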
So for instance, if you've sorted the tree by the area of your rectangles (provided there are no two rectangles with the same area), then you could do something like the following:
//for brevity, find the rectangle that is the
//greatest rectangle smaller than the input
const rec_bstree* find_best_fit(const rec_bstree* node, const rec& input_rec)
{
    if (node == NULL)
        return NULL;
    const rec_bstree* return_node = NULL;
    if (input_rec.area < node->area)
        return_node = find_best_fit(node->left_child, input_rec);
    else if (input_rec.area > node->area)
        return_node = find_best_fit(node->right_child, input_rec);
    else
        return node; // exact match: this node is the best fit
    // if the subtree had nothing better, the current node is the answer
    if (return_node == NULL)
        return node;
    return return_node;
}
BTW, if a tree is too complex, you could also simply use an array or std::vector of instances of your rectangles, sort it by some criterion using std::sort, and then do binary searches on the array.
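A minimal sketch of that flat alternative (the Rect type and the area ordering are my assumptions, not part of the question): sort once with std::sort, then let std::lower_bound do the binary search.
#include <algorithm>
#include <vector>

struct Rect { int width; int height; };

// Sort criterion: order rectangles by area.
static bool byArea(const Rect& a, const Rect& b)
{
    return a.width * a.height < b.width * b.height;
}

// sizes must already be sorted with std::sort(sizes.begin(), sizes.end(), byArea).
Rect findBestFit(const std::vector<Rect>& sizes, const Rect& input)
{
    std::vector<Rect>::const_iterator it =
        std::lower_bound(sizes.begin(), sizes.end(), input, byArea);
    return it != sizes.end() ? *it : sizes.back(); // clamp to the largest size
}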
Here is my proposed solution:
enum imgOptsScale {
notScaled = 0x0,
//7 relative scales, up to 0x7
w010h010, w010h025, w010h060, w010h120, w010h200, w010h310, w010h450,
w025h010, w025h025, w025h060, w025h120, w025h200, w025h310, w025h450,
w060h010, w060h025, w060h060, w060h120, w060h200, w060h310, w060h450,
w120h010, w120h025, w120h060, w120h120, w120h200, w120h310, w120h450,
w200h010, w200h025, w200h060, w200h120, w200h200, w200h310, w200h450,
w310h010, w310h025, w310h060, w310h120, w310h200, w310h310, w310h450,
w450h010, w450h025, w450h060, w450h120, w450h200, w450h310, w450h450,
w730h010, w730h025, w730h060, w730h120, w730h200, w730h310, w730h450
};
//Only call if width and height are actually specified; otherwise a 0 maps to the 10px bucket
imgOptsScale getClosestSizeTo(int width, int height) {
static const int possSizes[] = {10, 25, 60, 120, 200, 310, 450, 730};
static const int sizesHalfways[] = {17, 42, 90, 160, 255, 380, 590};
int widthI = 7;  //top index 7: 8 width buckets (w010..w730); starting at 6 would make the w730 row unreachable
while (sizesHalfways[widthI - 1] > width && --widthI > 0);
int heightI = 6; //top index 6: 7 height buckets (h010..h450)
while (sizesHalfways[heightI - 1] > height && --heightI > 0);
return (imgOptsScale)(8 + 7 * widthI + heightI);
}
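As a quick sanity check of the arithmetic (values read off the arrays above): a 300x50 request falls between the 255 and 380 halfway marks in width (the 310 bucket, widthI = 5) and between 42 and 90 in height (the 60 bucket, heightI = 2), so
imgOptsScale s = getClosestSizeTo(300, 50); // 8 + 7*5 + 2 == 45 == w310h060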
Here is my question: how are we supposed to colorize the console background with only a COLORREF datatype as a parameter?
The most common way of colorizing the background is the system("color --") call.
However, that approach is not possible here, and I am tasked to find out whether we can colorize the console background using only the COLORREF datatype.
I did some research, and what I came across was SetConsoleTextAttribute() and, again, the system("color --") call.
This is what I expect my code to be:
COLORREF data = RGB(255, 0, 0); // red, basically
SetConsoleBackground(HDC *console, data);
Is there any way of doing this? Thanks in advance.
[NEW ANSWER (edit)]
So @IInspectable pointed out that the console now supports 24-bit full RGB colors, so I did some research and managed to make it work.
This is how I solved it:
#include <Windows.h>
#include <cstdio>
#include <string>
struct Color
{
int r;
int g;
int b;
};
void SetBackgroundColor(const Color& aColor)
{
std::string modifier = "\x1b[48;2;" + std::to_string(aColor.r) + ";" + std::to_string(aColor.g) + ";" + std::to_string(aColor.b) + "m";
printf("%s", modifier.c_str()); // don't pass the sequence as a format string
}
void SetForegroundColor(const Color& aColor)
{
std::string modifier = "\x1b[38;2;" + std::to_string(aColor.r) + ";" + std::to_string(aColor.g) + ";" + std::to_string(aColor.b) + "m";
printf("%s", modifier.c_str());
}
int main()
{
// Set output mode to handle virtual terminal sequences
HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
DWORD dwMode = 0;
GetConsoleMode(hOut, &dwMode);
dwMode |= ENABLE_VIRTUAL_TERMINAL_PROCESSING;
SetConsoleMode(hOut, dwMode);
SetForegroundColor({ 100,100,20 });
SetBackgroundColor({ 50,100,10 });
printf("Hello World\n");
system("pause");
}
[OLD ANSWER]
The console's classic palette only supports 256 different color combinations, defined in a WORD of which the low 8 bits hold the color: the foreground color in the lower 4 bits and the background color in the 4 bits above it. This means the console only has support for 16 different colors:
enum class Color : int
{
Black = 0,
DarkBlue = 1,
DarkGreen = 2,
DarkCyan = 3,
DarkRed = 4,
DarkPurple = 5,
DarkYellow = 6,
DarkWhite = 7,
Gray = 8,
Blue = 9,
Green = 10,
Cyan = 11,
Red = 12,
Purple = 13,
Yellow = 14,
White = 15,
};
To set the background color of the typed characters, you could do:
void SetWriteColor(const Color& aColor)
{
HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
SetConsoleTextAttribute(hConsole, static_cast<WORD>(aColor) << 4);
}
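Hypothetical usage of the helper above; note that the shifted value leaves the foreground nibble at zero (black text), so you would OR in a foreground color, e.g. (static_cast<WORD>(bg) << 4) | static_cast<WORD>(fg), to keep the text readable.
SetWriteColor(Color::Blue); // bright blue background, black text from here on
printf("highlighted\n");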
I need to create a linear gradient brush to paint a surface with GDI+. On the input, I receive the following parameters:
Viewport rect: x = 0, y = 0, width = 744.09448819f, height = 1052.3622047f
Fill rect: x = 13.040037f, y = 17.478735f, width = 721.3703f, height = 1009.1535f
Gradient start: x = 384.8, y = 611.46
Gradient stop: x = 378.93, y = 474.96
Start color: a = 255, r = 179, g = 221, b = 253 (between 0 and 255)
End color: a = 255, r = 51, g = 102, b = 152 (between 0 and 255)
Matrix to apply: a = 7.7430985, b = 0, c = 0, d = 7.2926249, e = -2439.6639, f = -3446.2263
Wrap mode: clamp
These values were extracted from an SVG file. When I open it with my browser, I get the following result:
https://drive.google.com/open?id=0B7C-RwCTA9qaNnFsaGJpenlseGc
Now I try to draw the exact same gradient with GDI+. I use Embarcadero RAD Studio (C++Builder) to do that. I wrote the following code:
void DrawGradient()
{
std::auto_ptr<TBitmap> pBitmap(new TBitmap());
pBitmap->Width = 744.09448819f;
pBitmap->Height = 1052.3622047f;
pBitmap->Canvas->Brush->Color = clWhite;
pBitmap->Canvas->FillRect(TRect(0, 0, pBitmap->Width, pBitmap->Height));
Gdiplus::Graphics graphics(pBitmap->Canvas->Handle);
Gdiplus::LinearGradientBrush brush(
Gdiplus::PointF(384.8, 611.46),
Gdiplus::PointF(378.93, 474.96),
Gdiplus::Color(255, 179, 221, 253),
Gdiplus::Color(255, 51, 102, 152)
);
Gdiplus::Matrix matrix(7.7430985, 0, 0, 7.2926249, -2439.6639, -3446.2263);
brush.SetTransform(&matrix);
graphics.FillRectangle(&brush, 13.040037f, 17.478735f, 721.3703f, 1009.1535f);
pBitmap->SaveToFile(L"test.bmp");
}
However I get this result:
https://drive.google.com/open?id=0B7C-RwCTA9qaU1BhSTJxay16Z00
Putting aside the question of clamping, for which I know GDI+ offers no simple solution (nothing as easy for the user as passing a Gdiplus::WrapModeClamp value to SetWrapMode()), shouldn't the resulting drawing at least be closer to the expected result, or am I wrong?
Can somebody explain to me why I get such a different result?
NOTE: I have already tweaked all the parameters; I obtain just several variations of the same wrong result.
NOTE: The original SVG that I refer to above can be obtained here:
https://drive.google.com/open?id=0B7C-RwCTA9qaYkNRQ3lJNC1fNmM
Regards
Hi, I'm trying to produce output with WriteConsoleOutputA.
I have this code:
CHAR_INFO letterA;
letterA.Char.AsciiChar = 'A';
letterA.Attributes =
FOREGROUND_RED | FOREGROUND_INTENSITY |
BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_INTENSITY;
//Set up the positions:
COORD charBufSize = { 1, 1};
COORD characterPos = { 0, 0 };
SMALL_RECT writeArea = { 0,0,0,0 };
//Write the character
WriteConsoleOutputA(wHnd, &letterA, charBufSize, characterPos, &writeArea);
So at this point it writes a red A with a yellow background, but if, for example, I want the A to appear at coordinates (5,5), it doesn't print it, even if I change the SMALL_RECT to {0, 0, 10, 10}.
Or if I want to write another A to the right of the first one with this:
WriteConsoleOutputA(wHnd, &letterA, charBufSize, characterPos, &writeArea);
WriteConsoleOutputA(wHnd, &letterA, charBufSize, { 0, 1 }, &writeArea);
I'm beginning with this graphical console mode; it would be very helpful if someone could tell me how to print that character at the coordinates I want.
I have tried changing the coordinates to something like this:
COORD charBufSize = { 5, 10};
COORD characterPos = { 3, 2 };
SMALL_RECT writeArea = { 0,0,5,10 };
but it prints weird characters and other colours across the whole 5*10 buffer.
Thanks
César.
WriteConsoleOutput(..) is a complex function which needs to be handled carefully.
The dwBufferSize parameter (= your charBufSize) is nothing more than a size specification of the lpBuffer parameter (= your letterA). The only difference from simply saying that letterA has a size of 1 is that by splitting it into two axes you are able to specify the width and height of a text block with letterA characters in it. But remember that the size of letterA has to be charBufSize.X * charBufSize.Y; otherwise WriteConsoleOutput will do weird stuff, since it uses uninitialized memory.
The dwBufferCoord parameter (= your characterPos) defines the location within letterA from where to read the characters to be written to the console. So it simply defines an index offset. In your example this should always be { 0, 0 } (which is equal to letterA[0]) since letterA is only a single character.
The lpWriteRegion parameter (= your writeArea) does all the magic. It specifies the position, width and height of the area to be written by the call. The data to be written is defined by the previous parameters.
So to write a character to a specific location x, y do the following:
COORD charBufSize = {1, 1};
COORD characterPos = {0, 0};
SMALL_RECT writeArea = {x, y, x, y};
WriteConsoleOutputA(wHnd, &letterA, charBufSize, characterPos, &writeArea);
For a little better understanding use the following example and play a little with the values of charBufSize, characterPos and writeArea:
int i;
CHAR_INFO charInfo[10 * 10];
/* play with these values */
COORD charBufSize = {10, 10}; /* do not exceed x*y=100 !!! */
COORD characterPos = {5, 0}; /* must be within 0 and x*y=100 */
SMALL_RECT writeArea = {2, 2, 12, 12};
for (i = 0; i < (10 * 10); i++)
{
charInfo[i].Char.AsciiChar = 'A' + (i % 26);
charInfo[i].Attributes = FOREGROUND_RED | FOREGROUND_INTENSITY | BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_INTENSITY;
}
WriteConsoleOutputA(wHnd, charInfo, charBufSize, characterPos, &writeArea);
Here is a screenshot of the parameters in the example above showing the console and the variables. I hope this makes it a bit more clear.
I'm trying to use the following library (the templated version), but in the example shown in the library the user defines the bounding boxes. In my problem I have data of unknown dimensionality each time, so I don't know how to use it. Apart from this, shouldn't the R-Tree be able to calculate the bounding boxes each time there is an insertion?
This is the sample code of the library, as you can see the user defines the bounding boxes each time:
#include <stdio.h>
#include "RTree.h"
struct Rect
{
Rect() {}
Rect(int a_minX, int a_minY, int a_maxX, int a_maxY)
{
min[0] = a_minX;
min[1] = a_minY;
max[0] = a_maxX;
max[1] = a_maxY;
}
int min[2];
int max[2];
};
struct Rect rects[] =
{
Rect(0, 0, 2, 2), // xmin, ymin, xmax, ymax (for 2 dimensional RTree)
Rect(5, 5, 7, 7),
Rect(8, 5, 9, 6),
Rect(7, 1, 9, 2),
};
int nrects = sizeof(rects) / sizeof(rects[0]);
Rect search_rect(6, 4, 10, 6); // search will find above rects that this one overlaps
bool MySearchCallback(int id, void* arg)
{
printf("Hit data rect %d\n", id);
return true; // keep going
}
int main()
{
RTree<int, int, 2, float> tree;
int i, nhits;
printf("nrects = %d\n", nrects);
for(i=0; i<nrects; i++)
{
tree.Insert(rects[i].min, rects[i].max, i); // Note, all values including zero are fine in this version
}
nhits = tree.Search(search_rect.min, search_rect.max, MySearchCallback, NULL);
printf("Search resulted in %d hits\n", nhits);
// Iterator test
int itIndex = 0;
RTree<int, int, 2, float>::Iterator it;
for( tree.GetFirst(it);
!tree.IsNull(it);
tree.GetNext(it) )
{
int value = tree.GetAt(it);
int boundsMin[2] = {0,0};
int boundsMax[2] = {0,0};
it.GetBounds(boundsMin, boundsMax);
printf("it[%d] %d = (%d,%d,%d,%d)\n", itIndex++, value, boundsMin[0], boundsMin[1], boundsMax[0], boundsMax[1]);
}
// Iterator test, alternate syntax
itIndex = 0;
tree.GetFirst(it);
while( !it.IsNull() )
{
int value = *it;
++it;
printf("it[%d] %d\n", itIndex++, value);
}
getchar(); // Wait for keypress on exit so we can read console output
}
An example of what I want to save in an R-Tree is:
-------------------------------
| ID | dimension1 | dimension2|
-------------------------------
| 1 | 8 | 9 |
| 2 | 3 | 5 |
| 3 | 2 | 1 |
| 4 | 6 | 7 |
-------------------------------
Dimensionality
There will be some limit in your requirements on the dimensionality. This is because computers only have finite storage, so they cannot store an infinite number of dimensions. Really it is a decision for you how many dimensions you wish to support. The most common numbers, of course, are two and three. Do you actually need to support eleven? When would you use it?
You can do this either by always using an R-tree with the maximum number of dimensions you support and passing zero as the other coordinates (see the sketch just below), or preferably you would create several code paths, one for each supported number of dimensions. That is, you would have one set of routines for two-dimensional data, another for three-dimensional, and so on.
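A minimal sketch of the first option, assuming (purely for illustration) a cap of three dimensions and the same RTree.h header as the sample above:
#include "RTree.h"

const int MAX_DIMS = 3;

// A 2-D point goes into the 3-D tree with the unused axis padded with zero.
void insertPoint2D(RTree<int, int, MAX_DIMS, float>& tree,
                   int id, int dimension1, int dimension2)
{
    int minCorner[MAX_DIMS] = { dimension1, dimension2, 0 };
    int maxCorner[MAX_DIMS] = { dimension1, dimension2, 0 };
    tree.Insert(minCorner, maxCorner, id);
}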
Calculating the bounding box
The bounding box is the rectangle or cuboid which is aligned to the axes, and completely surrounds the object you wish to add.
So if you are inserting axis-aligned rectangles/cuboids etc, then the shape is the bounding box.
If you are inserting points, the min and max of each dimension are just the point value of that dimension.
Any other shape, you have to calculate the bounding box. E.g. if you are inserting a triangle, you need to calculate the rectangle which completely surrounds the triangle as the bounding box.
The library can't do this for you because it doesn't know what you are inserting. You might be inserting spheres stored as centre + radius, or complex triangle mesh shapes. The R-Tree can provide the spatial index but needs you to provide that little bit of information to fill in the gaps.
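For the point data in the question's table, that boils down to inserting degenerate boxes with min == max in each dimension. A sketch against the same library as the sample above:
RTree<int, int, 2, float> tree;

int ids[] = {1, 2, 3, 4};
int d1[]  = {8, 3, 2, 6}; // dimension1 column
int d2[]  = {9, 5, 1, 7}; // dimension2 column

for (int i = 0; i < 4; i++)
{
    int minCorner[2] = { d1[i], d2[i] };
    int maxCorner[2] = { d1[i], d2[i] }; // a point is a zero-extent box
    tree.Insert(minCorner, maxCorner, ids[i]);
}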
So, I have an XImage and I was able to store it in the filesystem, but the image didn't have the cursor in it. On further research, I found that X.Org has a fix for this, using the XFixes extension (Xfixes.h).
The function XFixesGetCursorImage(display) returns a structure XFixesCursorImage:
typedef struct {
short x, y;
unsigned short width, height;
unsigned short xhot, yhot;
unsigned long cursor_serial;
unsigned long *pixels;
#if XFIXES_MAJOR >= 2
Atom atom; /* Version >= 2 only */
const char *name; /* Version >= 2 only */
#endif
} XFixesCursorImage;
I believed that unsigned long *pixels is an array containing pixel-by-pixel information for the entire image (the cursor, with the rest of the background valued 0).
Then, using the steps given in this article, I would merge my original XImage with that of the cursor (I hope I have the right idea).
My problem:
To effectively do the entire masking thing, the first thing I need is all the pixel values in XFixesCursorImage. But I believe the pixels array contains far too few values: my screen size is 1366 x 768, so I expected 1366*768 elements in the pixels array (each containing a long value of the pixel in ARGB), but when I used GDB and tried to find the last element, it was at index 21272 (21273 elements in total).
Using GDB
(gdb) print cursor[0]
$22 = {x = 475, y = 381, width = 24, height = 24, xhot = 11, yhot = 11,
cursor_serial = 92, pixels = 0x807f39c, atom = 388, name = 0x807fc9c "xterm"}
(gdb) print cursor->pixels[21273]
Cannot access memory at address 0x8094000
Some more data:
(gdb) print cursor[0]
$5 = {x = 1028, y = 402, width = 1, height = 1, xhot = 1, yhot = 1,
cursor_serial = 120, pixels = 0x807e854, atom = 0, name = 0x807e858 ""}
(gdb) print cursor[0]->pixels[21994]
$8 = 0
(gdb) print cursor[0]->pixels[21995]
Cannot access memory at address 0x8094000
Am I missing something? The number of elements doesn't make sense to me.
Which brings me to a very important question:
How is the data structured in both XImage->data and XFixesCursorImage->pixels?
XFixesCursorImage stores ONLY the cursor image, NOT the entire screen. So, as Andrey says, you can only access 24x24 unsigned longs.
You can place the cursor image on your XImage by using the x and y fields of the XFixesCursorImage, but remember that the pixel format of the XImage might differ from that of the XFixesCursorImage, which is always 32 bits per pixel ARGB.
That said, please note that unsigned long can actually be 64 bits when compiling for x86_64, so your conversions should use unsigned long to stay portable and not assume it will be 32 bits.
Placing example (with some functions missing, but good enough for the explanation):
unsigned char r, g, b, a;
unsigned short row, col, pos;

for (pos = row = 0; row < img->height; row++)
{
    for (col = 0; col < img->width; col++, pos++)
    {
        /* each element holds premultiplied ARGB in its low 32 bits */
        a = (unsigned char)((img->pixels[pos] >> 24) & 0xff);
        r = (unsigned char)((img->pixels[pos] >> 16) & 0xff);
        g = (unsigned char)((img->pixels[pos] >> 8) & 0xff);
        b = (unsigned char)((img->pixels[pos] >> 0) & 0xff);
        put_pixel_in_ximage(img->x + col, img->y + row,
                            convert_to_ximage_pixel(r, g, b, a));
    }
}
Notes: img in the code is the XFixesCursorImage. Also, do not trust the cursor_serial field to decide whether two cursors differ, because sometimes this field is just 0; I am not sure why.
pixels contains a 32-bits-per-pixel pixmap; in your case that is 24 x 24 pixels x 4 bytes = 2304 bytes.
From the protocol docs:
The cursor image itself is returned as a single image at 32 bits per pixel, with 8 bits of alpha in the most significant 8 bits of the pixel, followed by 8 bits each of red and green, and finally 8 bits of blue in the least significant 8 bits. The color components are pre-multiplied with the alpha component.
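Because the components are pre-multiplied, any conversion that needs straight (non-premultiplied) color has to divide each channel back out by alpha. A small hedged sketch:
// Recover a straight 0-255 channel value from a premultiplied one.
// (Alpha of 0 means fully transparent, so the color does not matter.)
unsigned char unpremultiply(unsigned char channel, unsigned char alpha)
{
    return alpha ? (unsigned char)((channel * 255) / alpha) : 0;
}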