How can I pixelate a 1D array? - C++

I want to pixelate an image stored in a 1D array, but I am not sure how to do it. This is what I have come up with so far.
The value of pixelation is currently 3 for testing purposes.
Currently it just creates a section of randomly coloured pixels along the left third of the image. If I increase the value of pixelation, the number of randomly coloured pixels decreases, and vice versa. What am I doing wrong?
I have already implemented the rotation, reading of the image and saving of a new image; this is just a separate function I need assistance with.
picture pixelate( const std::string& file_name, picture& tempImage, int& pixelation /* TODO: OTHER PARAMETERS HERE */)
{
    picture pixelated = tempImage;

    RGB tempPixel;
    tempPixel.r = 0;
    tempPixel.g = 0;
    tempPixel.b = 0;

    int counter = 0;
    int numtimesrun = 0;

    for (int x = 1; x < tempImage.width; x += pixelation)
    {
        for (int y = 1; y < tempImage.height; y += pixelation)
        {
            //RGB tempcol;
            //tempcol for pixelate
            for (int i = 1; i < pixelation; i++)
            {
                for (int j = 1; j < pixelation; j++)
                {
                    tempPixel.r += tempImage.pixel[counter + pixelation * numtimesrun].colour.r;
                    tempPixel.g += tempImage.pixel[counter + pixelation * numtimesrun].colour.g;
                    tempPixel.b += tempImage.pixel[counter + pixelation * numtimesrun].colour.b;
                    counter++;
                    //read colour
                }
            }
            for (int k = 1; k < pixelation; k++)
            {
                for (int l = 1; l < pixelation; l++)
                {
                    pixelated.pixel[numtimesrun].colour.r = tempPixel.r / pixelation;
                    pixelated.pixel[numtimesrun].colour.g = tempPixel.g / pixelation;
                    pixelated.pixel[numtimesrun].colour.b = tempPixel.b / pixelation;
                    //set colour
                }
            }
            counter = 0;
            numtimesrun++;
        }
        cout << x << endl;
    }
    cout << "Image successfully pixelated." << endl;
    return pixelated;
}

I'm not too sure what you really want to do with your code, but I can see a few problems.
For one, you use for() loops with variables starting at 1. That's certainly wrong. Arrays in C/C++ start at 0.
The other main problem I can see is the pixelation parameter. You use it to increase x and y without knowing (at least in that function) whether it is a multiple of the width and height. If it is not, you will definitely be missing pixels on the right edge and at the bottom (which edges depends on the orientation, of course). Again, it very much depends on what you're trying to achieve.
Also, the i and j loops read from the position defined by counter and numtimesrun, which means the last index you hit is not bounded by tempImage.width or tempImage.height. With that you are rather likely to get many overflows. Actually, that would also explain the problems you see at the edges. (See the update below.)
Another potential problem (I cannot tell for sure without seeing the structure declaration): the sum using tempPixel.r += <value> may overflow. If the RGB components are defined as unsigned char (rather common), then you will definitely get overflows, so your average sum is broken in that case. If the structure uses floats, then you're good.
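A minimal sketch of overflow-safe accumulation, assuming the colour components really are unsigned char (the tempImage/pixel/colour names come from the question; blockSize is hypothetical):
int sumR = 0, sumG = 0, sumB = 0;   // plain ints cannot wrap for any sane block size
for (int n = 0; n < blockSize; ++n)
{
    sumR += tempImage.pixel[n].colour.r;  // unsigned char would wrap past 255 here
    sumG += tempImage.pixel[n].colour.g;
    sumB += tempImage.pixel[n].colour.b;
}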
Note also that your average is wrong. You are adding pixelation x pixelation source values, but your average is calculated as sum / pixelation, so the result is pixelation times too large. You probably wanted sum / (pixelation * pixelation). For example, with pixelation = 3 you sum 9 pixels, so the average is sum / 9, not sum / 3.
Your first loop with i and j computes a sum. The math is most certainly wrong. The counter + pixelation * numtimesrun expression will start reading at the second line, it seems, yet you are reading i * j values. That being said, it may be what you are trying to do (i.e. a moving average), in which case it could be optimized, but I'll leave that out for now.
Update
If I understand what you are doing, a representation would be something like a filter. Here is a picture of a 3x3:
.+.
+*+  =>  *
.+.
What is on the left is what you are reading, which means the source needs to be at least 3x3. What I show on the right is the result: as we can see, it is 1x1. From what I see in your code, you do not take that into account at all. (The varied characters represent varied weights; in your case all weights are 1.0.)
You have two ways to handle that problem:
The resulting image has a size of width - pixelation * 2 + 1 by height - pixelation * 2 + 1; in this case you keep one result and do not care about the edges...
You rewrite the code to handle edges. This means you use less source data to compute the resulting edges. Another way is to compute the edge cases and save that in several output pixels (i.e. duplicate the pixels on the edges).
Update 2
Hmmm... looking at your code again, it seems that you compute the average of the 3x3 and save it in the 3x3:
.+.      ***
+*+  =>  ***
.+.      ***
Then the problem is different: numtimesrun is wrong. In your k and l loops you save pixelation * pixelation pixels in the SAME destination pixel, advancing it by only one each time... so you are doing what I showed in my first update, while it looks like you were trying to do what is shown in this 2nd update.
The numtimesrun could be increased by pixelation each time:
numtimesrun += pixelation;
However, that's not enough to fix your k and l loops. There you probably need to calculate the correct destination, maybe something like this (it also requires a reset of the counter before the loops):
counter = 0;
... for loops ...
    pixelated.pixel[counter + pixelation * numtimesrun].colour.r = ...;
    ... (take care of g and b)
    ++counter;
Yet again, I cannot tell for sure what you are trying to do, so I do not know why you'd want to copy the same pixel pixelation x pixelation times. But that explains why you get data only at the left (or top) of the image (it very much depends on the orientation, one side for sure; and if it is a third of the image, then pixelation is probably 3).
WARNING: if you implement the save properly, you'll experience crashes if you do not take care of the overflows mentioned earlier.
Update 3
As explained by Mark in the comment below, you have a 1D array representing a 2D image. In that case, your counter variable is completely wrong, since it is 100% linear whereas the 2D image is not: the 2nd line is width entries further into the array. As written, you read the first 3 pixels at the top-left, then the next 3 pixels on the same line, and finally the next 3 pixels, still on the same line. Of course, it could be that your image is defined so that these pixels really are one after another, although it is not very likely...
Mark's answer is concise and gives you the information necessary to access the correct pixels. However, you will still be hit by the overflow and possibly the fact that the width and height parameters are not a multiple of pixelation...

I don't do a lot of C++, but here's a pixelate function I wrote for Processing. It takes as an argument the width/height of the "big pixels" (blocks) you want to create.
void pixelateImage(int pxSize) {
  // use ratio of height/width...
  float ratio;
  if (width < height) {
    ratio = (float) height / width;  // cast to avoid integer division
  }
  else {
    ratio = (float) width / height;
  }
  // ... to set pixel height
  int pxH = int(pxSize * ratio);
  noStroke();
  for (int x = 0; x < width; x += pxSize) {
    for (int y = 0; y < height; y += pxH) {
      fill(p.get(x, y));
      rect(x, y, pxSize, pxH);
    }
  }
}
Without the built-in rect() function you'd have to write pixel-by-pixel using another two for loops. Note the offsets into the destination: the block's top-left corner is at (x, y), so the inner loops must not always write to the first block:
for (int px = 0; px < pxSize; px++) {
  for (int py = 0; py < pxH; py++) {
    pixelated.pixel[(y + py) * tempImage.width + (x + px)].colour.r = tempPixel.r;
    pixelated.pixel[(y + py) * tempImage.width + (x + px)].colour.g = tempPixel.g;
    pixelated.pixel[(y + py) * tempImage.width + (x + px)].colour.b = tempPixel.b;
  }
}

Generally when accessing an image stored in a 1D buffer, each row of the image will be stored as consecutive pixels and the next row will follow immediately after. The way to address into such a buffer is:
image[y*width+x]
For your purposes you want both inner loops to generate coordinates that go from the top and left of the pixelation square to the bottom right.
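Putting the pieces together, here is a minimal block-average pixelate sketch using that addressing. It assumes the picture/RGB types from the question, that colour components fit an int accumulator, and it clamps partial blocks at the right/bottom edges; treat it as one possible shape, not the definitive implementation (needs <algorithm> for std::min):
picture pixelate(const picture& src, int pixelation)
{
    picture out = src;
    for (int by = 0; by < src.height; by += pixelation)      // block rows
    {
        for (int bx = 0; bx < src.width; bx += pixelation)   // block columns
        {
            // Clamp the block so partial blocks at the edges still work.
            int bw = std::min(pixelation, src.width  - bx);
            int bh = std::min(pixelation, src.height - by);

            int sumR = 0, sumG = 0, sumB = 0;                // wide accumulators
            for (int y = by; y < by + bh; ++y)
                for (int x = bx; x < bx + bw; ++x)
                {
                    const RGB& c = src.pixel[y * src.width + x].colour;
                    sumR += c.r; sumG += c.g; sumB += c.b;
                }

            int n = bw * bh;                                 // pixels in this block
            for (int y = by; y < by + bh; ++y)
                for (int x = bx; x < bx + bw; ++x)
                {
                    RGB& c = out.pixel[y * src.width + x].colour;
                    c.r = sumR / n; c.g = sumG / n; c.b = sumB / n;
                }
        }
    }
    return out;
}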

Related

How can I quickly check whether an element in a grid of tiles is in my viewport?

I have a for loop that I use to draw a grid of tiles with SDL in a game. Since the grid is quite huge, with more than 50k elements, I want to optimize it.
There is this function that I use to check if I should draw a tile; if it's outside of the screen, I ignore it.
bool Camera::isInViewport(int &x, int &y, int &w, int &h) {
    int translatedX = x + offsetX;
    int translatedY = y + offsetY;
    if (translatedX + w >= 0 && translatedX <= 0 + sdl.windowWidth) {
        if (translatedY + h >= 0 && translatedY <= 0 + sdl.windowHeight) {
            return true;
        }
    }
    return false;
}
I profiled this function and it's eating 15% of the CPU alone when the grid is big. Is it possible to make this faster? I can't think of a way to make it use fewer resources.
There is not a lot that you can do with this function. Do not pass ints as references: they are internally passed as pointers, and dereferencing them adds cost. Merge the conditions into one if statement, and start with the ones that will most probably evaluate to false, to make early short-circuiting possible.
What I would do instead to solve this performance issue is to organize your tiles in a 2D array where index and coordinates can be calculated from each other. In that case you just need to determine the index boundaries of the tiles covered by your viewport. Instead of checking the result of this function for every cell, you can compute the left and right X indices and the top and bottom Y indices, then draw them in two nested loops like this:
for (int y = topY; y <= bottomY; ++y)
    for (int x = leftX; x <= rightX; ++x)
        // do drawing with tile[y][x];
Another approach would be to cache the previous results. If the camera is not moving and the tiles are not moving, then the result of this function will not change. Storing a flag that indicates whether each tile is visible could work here (though it is not good practice in a big game); update the flags every time the camera moves, or recalculate a tile when it moves (if that is possible in your app). Still, recalculating all visibility flags on camera movement will be expensive, so try the first optimization and reduce the task by finding which tile range is affected by the camera at all. A sketch of that index computation follows below.
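For illustration, a minimal sketch of computing the visible index range, assuming square tiles of tileSize pixels and the offsetX/offsetY convention from the question (names like tilesX/tilesY are hypothetical):
#include <algorithm>  // std::max, std::min

// Inclusive range of tile indices that can overlap the window, with a
// one-tile margin to absorb integer-division rounding at the edges.
void visibleRange(int offsetX, int offsetY, int windowW, int windowH,
                  int tileSize, int tilesX, int tilesY,
                  int& leftX, int& rightX, int& topY, int& bottomY)
{
    leftX   = std::max(0,          (-offsetX) / tileSize - 1);
    topY    = std::max(0,          (-offsetY) / tileSize - 1);
    rightX  = std::min(tilesX - 1, (windowW - offsetX) / tileSize + 1);
    bottomY = std::min(tilesY - 1, (windowH - offsetY) / tileSize + 1);
}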

How to quickly scan and analyze large groups of pixels?

I am trying to build an autoclicker using C++ to beat a 2D videogame in which the following situation appears:
The main character is in the center of the screen, the background is completely black and enemies are coming from all directions. I want my program to be capable of clicking on enemies just as they appear on the screen.
What I came up with at first is that the enemies have a minimum size of 15px, so I tried doing a search every 15 pixels and analyzing whether any pixel differs from the background's RGB, using GetPixel(). It looks something like this:
COLORREF color;
int R, G, B;
for(int i=0; i<SCREEN_SIZE_X; i+=15){ // these SCREEN_SIZE values are #defined with the ones of my screen
    for(int j=0; j<SCREEN_SIZE_Y; j+=15){
        // the following conditional excludes the center, which is the player's position
        if((i<PLAYER_MIN_EDGE_X or i>PLAYER_MAX_EDGE_X) and (j<PLAYER_MIN_EDGE_Y or j>PLAYER_MAX_EDGE_Y)){
            color = GetPixel(GetDC(nullptr), i, j);
            R = GetRValue(color);
            G = GetGValue(color);
            B = GetBValue(color);
            if(R!=0 or G!=0 or B!=0) cout<<"Enemy Found"<<endl;
        }
    }
}
It turns out that, as expected, the GetPixel() function is extremely slow, as it has to verify about 4000 pixels to cover just one screen scan. I was thinking about a way to solve this faster, and while looking at the keyboard I noticed the "Prt Scr" button, and realized that whatever that button does, it is able to almost instantly save the information of millions of pixels.
I am sure there is a proper, different technique for approaching this kind of problem.
What kind of theory or technique for pixel analysis should I investigate and read about so that this can be considered respectable code, actually works, and is much faster?
The GetPixel() routine is slow because it fetches the data from the video card (device) memory one pixel at a time. To optimize your loop, fetch the entire screen at once and put it into an array of pixels. Then you can iterate over that array much faster, because you'll be operating on data in your RAM (host memory).
For a further optimization, I also recommend clearing your player's pixels (in the center of the screen) after fetching the screen into your pixel array. This way you can eliminate the if((i<PLAYER_MIN_EDGE_X or i>PLAYER_MAX_EDGE_X) and (j<PLAYER_MIN_EDGE_Y or j>PLAYER_MAX_EDGE_Y)) condition inside the loop.
CImage image;
// Save DC to image (see the capture sketch below)
int R, G, B;
BYTE *pRealData = (BYTE*)image.GetBits();
int pit = image.GetPitch();        // bytes from one row to the next
int bitCount = image.GetBPP()/8;   // bytes per pixel
int w = image.GetWidth();
int h = image.GetHeight();
for (int i=0; i<h; i++)
{
    for (int j=0; j<w; j++)
    {
        B = *(pRealData + pit*i + j*bitCount);
        G = *(pRealData + pit*i + j*bitCount + 1);
        R = *(pRealData + pit*i + j*bitCount + 2);
    }
}
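The "Save DC to image" step was elided above. As one possible way to fill it in (my assumption, not part of the original answer), you could blit the screen into the CImage with plain Win32 GDI calls:
// Hypothetical capture sketch: copy the screen into "image" so that
// GetBits() above reads real data. Error handling omitted for brevity.
HDC screenDC = GetDC(nullptr);                        // whole-screen DC
int sw = GetSystemMetrics(SM_CXSCREEN);
int sh = GetSystemMetrics(SM_CYSCREEN);
image.Create(sw, sh, 32);                             // 32 bpp buffer
HDC imageDC = image.GetDC();
BitBlt(imageDC, 0, 0, sw, sh, screenDC, 0, 0, SRCCOPY);
image.ReleaseDC();
ReleaseDC(nullptr, screenDC);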

Fast, good quality pixel interpolation for extreme image downscaling

In my program, I am downscaling an image of 500px or larger to an extreme level of approximately 16px-32px. The source image is user-specified, so I do not have control over its size. As you can imagine, few pixel interpolations hold up, and inevitably the result is heavily aliased.
I've tried bilinear, bicubic and square average sampling. The square average sampling actually provides the most decent results, but the smaller the target, the larger the sampling radius has to be. As a result it gets quite slow, slower than the other interpolation methods.
I have also tried an adaptive square average sampling so that the smaller it gets the greater the sampling radius, while the closer it is to its original size, the smaller the sampling radius. However, it produces problems and I am not convinced this is the best approach.
So the question is: What is the recommended type of pixel interpolation that is fast and works well on such extreme levels of downscaling?
I do not wish to use a library so I will need something that I can code by hand and isn't too complex. I am working in C++ with VS 2012.
Here's some example code I've tried, as requested (hopefully without errors from my pseudo-code cut and paste). This performs a 7x7 average downscale, and although it gives a better result than bilinear or bicubic interpolation, it also takes quite a performance hit:
// Sizing control
ctl(0): "Resize",Range=(0,800),Val=100

// Variables
float fracx,fracy;
int Xnew,Ynew,p,q,Calc;
int x,y,z,p1,q1,i,j;

// New image dimensions
Xnew = image->width * ctl(0) / 100;
Ynew = image->height * ctl(0) / 100;

for (y=0; y<image->height; y++){ // rows
    for (x=0; x<image->width; x++){ // columns
        p1 = (int)x * image->width / Xnew;
        q1 = (int)y * image->height / Ynew;
        for (z=0; z<3; z++){ // channels
            Calc = 0; // reset the accumulator for each channel
            for (i=-3; i<=3; i++) {
                for (j=-3; j<=3; j++) {
                    Calc += (int)(src(p1-i, q1-j, z));
                } //j
            } //i
            Calc /= 49;
            pset(x, y, z, Calc);
        } // channels
    } // columns
} // rows
Thanks!
The first point is to use pointers to your data. Never use indexing at every pixel. When you write src(p1-i,q1-j,z) or pset(x, y, z, Calc), how much computation is being done? Use pointers to the data and manipulate those.
Second: your algorithm is wrong. You don't want an average filter; you want to lay a grid over your source image and, for every grid cell, compute the average and put it in the corresponding pixel of the output image.
The specific solution should be tailored to your data representation, but it could be something like this:
// (requires <vector>, <algorithm>, <functional>, <cstring>, <cstdint>)
std::vector<uint32_t> accum(Xnew);
std::vector<uint32_t> count(Xnew);
uint32_t *paccum, *pcount;
uint8_t* pin = /*pointer to input data*/;
uint8_t* pout = /*pointer to output data*/;
for (int dr = 0, sr = 0, w = image->width, h = image->height; sr < h; ++dr) {
    memset(paccum = accum.data(), 0, Xnew*4);
    memset(pcount = count.data(), 0, Xnew*4);
    while (sr * Ynew / h == dr) {
        paccum = accum.data();
        pcount = count.data();
        for (int dc = 0, sc = 0; sc < w; ++sc) {
            *paccum += *pin;   // accumulate the source pixel
            *pcount += 1;
            ++pin;
            if (sc * Xnew / w > dc) {
                ++dc;
                ++paccum;
                ++pcount;
            }
        }
        sr++;
    }
    std::transform(begin(accum), end(accum), begin(count), pout, std::divides<uint32_t>());
    pout += Xnew;
}
This was written using my own library (still in development) and it seems to work, but I later changed the variable names to make it simpler here, so I don't guarantee anything!
The idea is to have a local buffer of 32 bit ints which can hold the partial sum of all pixels in the rows which fall in a row of the output image. Then you divide by the cell count and save the output to the final image.
The first thing you should do is to set up a performance evaluation system to measure how much any change impacts on the performance.
As said previously, you should use pointers rather than indexes for a (probably substantial) speed-up, and you should not simply average, as a basic averaging of pixels is essentially a blur filter.
I would highly advise you to rework your code to use "kernels". A kernel is the matrix giving the weight of each source pixel. That way, you will be able to test different strategies and optimize quality; a generic sketch of applying a kernel follows after the links below.
Example of kernels:
https://en.wikipedia.org/wiki/Kernel_(image_processing)
Upsampling/downsampling kernel:
http://www.johncostella.com/magic/
Note: from the code it seems you apply a 7x7 kernel (i and j run from -3 to 3, and the sum is divided by 49) with all weights equal. The equivalent 3x3 box kernel would be:
[1 1 1]
[1 1 1] * 1/9
[1 1 1]
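As an illustration only (my sketch, not code from the answers above), here is a minimal kernel application for a single output pixel, assuming a grayscale float image stored row-major and a square kernel with odd side length; edge handling is left to the caller:
// Apply an NxN kernel (odd N) centred on (cx, cy) of a grayscale image.
// The caller must keep the kernel footprint inside the image bounds.
float applyKernel(const float* img, int width,
                  const float* kernel, int n, int cx, int cy)
{
    int r = n / 2;  // kernel radius
    float sum = 0.0f;
    for (int ky = -r; ky <= r; ++ky)
        for (int kx = -r; kx <= r; ++kx)
            sum += img[(cy + ky) * width + (cx + kx)]
                 * kernel[(ky + r) * n + (kx + r)];
    return sum;
}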

Vertically flipping a char array: is there a more efficient way?

Let's start with some code:
QByteArray OpenGLWidget::modifyImage(QByteArray imageArray, const int width, const int height){
    if (vertFlip){
        /* Each pixel consists of four unsigned chars: Red Green Blue Alpha.
         * The field is normally 640*480, which means the whole picture is in fact 640*4 uChars wide.
         * The whole ByteArray is one-dimensional, which means that index 640*4 is the red of the first pixel of the second row.
         * This function is EXTREMELY SLOW.
         */
        QByteArray tempArray = imageArray;
        for (int h = 0; h < height; ++h){
            for (int w = 0; w < width/2; ++w){
                for (int i = 0; i < 4; ++i){
                    imageArray.data()[h*width*4 + 4*w + i] = tempArray.data()[h*width*4 + 4*(width - 1 - w) + i];
                    imageArray.data()[h*width*4 + 4*(width - 1 - w) + i] = tempArray.data()[h*width*4 + 4*w + i];
                }
            }
        }
    }
    return imageArray;
}
This is the code I currently use to flip an image which is 640*480 (the image is actually not guaranteed to be 640*480, but it mostly is). The colour encoding is RGBA, which means that the total array size is 640*480*4. I receive the images at 30 FPS, and I want to show them on the screen at the same rate.
On an older CPU (Athlon X2) this code is just too much: the CPU races to keep up with the 30 FPS. So the question is: can I do this more efficiently?
I am also working with OpenGL. Does that have a gimmick I am not aware of that can flip images with relatively low CPU/GPU usage?
According to this question, you can flip an image in OpenGL by scaling it by (1,-1,1). This question explains how to do transformations and scaling.
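For illustration, a minimal fixed-function sketch of that idea, assuming a legacy OpenGL context and an existing drawTexturedQuad() helper (both assumptions on my part):
// Draw the same textured quad mirrored vertically by negating the Y scale.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glScalef(1.0f, -1.0f, 1.0f);  // mirror about the X axis
drawTexturedQuad();           // assumed: your existing draw call
glPopMatrix();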
You can improve at least by doing it blockwise, making use of the cache architecture. In your example one of the accesses (either the read OR the write) will be off-cache.
For a start it can help to "capture scanlines" if you're using two loops to loop through the pixels of an image, like so:
for (int y = 0; y < height; ++y)
{
    // Capture scanline.
    char* scanline = imageArray.data() + y*width*4;
    for (int x = 0; x < width/2; ++x)
    {
        const int flipped_x = width - x-1;
        for (int i = 0; i < 4; ++i)
            std::swap(scanline[x*4 + i], scanline[flipped_x*4 + i]);
    }
}
Another thing to note is that I used swap instead of a temporary image. That'll tend to be more efficient since you can just swap using registers instead of loading pixels from a copy of the entire image.
It also generally helps to use a 32-bit integer instead of working one byte at a time if you're going to be doing anything like this. If you're working with pixels with 8-bit types but know that each pixel is 32 bits, as in your case, you can generally get away with a cast to uint32_t*, e.g.
for (int y = 0; y < height; ++y)
{
    uint32_t* scanline = (uint32_t*)imageArray.data() + y*width;
    std::reverse(scanline, scanline + width);
}
At this point you might parallelize the y loop. Flipping an image this way (it should be called "horizontal" flipping, if I understood your original code correctly) is a little bit tricky with the access patterns, but you should be able to get quite a decent boost using the above techniques.
I am also working with OpenGL. Does that have a gimmick I am not aware of that can flip images with relatively low CPU/GPU usage?
Naturally the fastest way to flip images is to not touch their pixels at all and just save the flipping for the final part of the pipeline when you render the result. For this you might render a texture in OGL with negative scaling instead of modifying the pixels of a texture.
Another thing that's really useful in video and image processing is to represent an image to process like this for all your image operations:
struct Image32
{
    uint32_t* pixels;
    int32_t width;
    int32_t height;
    int32_t x_stride;
    int32_t y_stride;
};
The stride fields are what you use to get from one scanline (row) of an image to the next vertically, and from one column to the next horizontally. When you use this representation, you can use negative values for the strides and offset the pixels pointer accordingly.
You can also use the stride fields to, say, render only every other scanline of an image for fast interactive half-res previews, by using y_stride=width*2 and height/=2. You can quarter-res an image by setting the x stride to 2 and the y stride to 2*width and then halving the width and height. And you can render a cropped image without making your blit functions accept a boatload of parameters by just modifying these fields, keeping the y stride at the full image width to get from one row of the cropped section to the next:
// Using the stride representation of Image32, this can now
// blit a cropped source, a horizontally flipped source,
// a vertically flipped source, a source flipped both ways,
// a half-res source, a quarter-res source, a quarter-res
// source that is horizontally flipped and cropped, etc.,
// and all without modifying the source image in advance
// or having to accept all kinds of extra drawing parameters.
void blit(int dst_x, int dst_y, Image32 dst, Image32 src);

// We don't have to do things like this (and I think I lost
// some capabilities with this version below, but it hurts my
// brain too much to think about what capabilities were lost):
void blit_gross(int dst_x, int dst_y, int dst_w, int dst_h, uint32_t* dst,
                int src_x, int src_y, int src_w, int src_h,
                const uint32_t* src, bool flip_x, bool flip_y);
By using negative stride values and passing the image to an operation (e.g. a blit), the result will naturally be flipped without having to actually flip the image. It ends up being "drawn flipped", so to speak, just as in the case of using OGL with a negative scaling transformation matrix. A small sketch of building such a flipped view follows.
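As a minimal sketch (my illustration, assuming Image32 as declared above with y_stride measured in pixels), a vertically flipped view costs only a pointer adjustment:
// Build a vertically flipped "view" of an image: same pixel storage,
// but rows are walked bottom-to-top via a negative y stride.
Image32 flipVertical(Image32 img)
{
    Image32 v = img;
    v.pixels  += (img.height - 1) * img.y_stride; // start at the last row
    v.y_stride = -img.y_stride;                   // step upward through rows
    return v;
}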

Uneven Circles in Connect 4 Board

I'm in the process of creating a two-player Connect 4 game, but I can't seem to get the circular areas for placing tokens spaced evenly.
Here's the code that initializes the positions of each circle:
POINT tilePos;
for (int i = 0; i < Board::Dims::MAXX; ++i)
{
    tileXY.push_back (std::vector<POINT> (Board::Dims::MAXY)); //add column
    for (int j = 0; j < Board::Dims::MAXY; ++j)
    {
        tilePos.x = boardPixelDims.left + (i + 1./2) * (boardPixelDims.width / Board::Dims::MAXX);
        tilePos.y = boardPixelDims.top + (j + 1./2) * (boardPixelDims.height / Board::Dims::MAXY);
        tileXY.at (i).push_back (tilePos); //add circle in column
    }
}
I use a 2D vector of POINTs, tileXY, to store the positions. Recall the board is 7 circles wide by 6 circles high.
My logic is such that the first circle starts (for X) at:
left + width / #circles * 0 + width / #circles / 2
and increases by width / #circles each time, which is easy to picture for smaller numbers of circles.
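For instance, with boardPixelDims.width = 700 and MAXX = 7 circles, that places the X centres at left + 50, left + 150, ..., left + 650, each 100 pixels apart.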
Later, I draw the circles like this:
for (const std::vector<POINT> &col : _tileXY)
{
    for (const POINT pos : col)
    {
        if (g.FillEllipse (&red, (int)(pos.x - CIRCLE_RADIUS), pos.y - CIRCLE_RADIUS, CIRCLE_RADIUS, CIRCLE_RADIUS) != Gdiplus::Status::Ok)
            MessageBox (_windows.gameWindow, "FillEllipse failed.", 0, MB_SYSTEMMODAL);
    }
}
Those loops iterate through each element of the vector and draw each circle in red (to stand out for the moment). The int conversion is to disambiguate the function call. The first two arguments after the brush are the top-left corner, and CIRCLE_RADIUS is 50.
The problem is that my board looks like this (screenshot not reproduced here; sorry if it hurts your eyes a bit):
As you can see, the circles are too far up and too far left. They're also too small, but that's easily fixed. I tried changing some ints to doubles, but ultimately this was the closest I ever got to the real pattern. The expanded formula (expanding (i + 1./2)) for the positions gives the same result as well.
Have I missed a small detail, or is my whole logic behind it off?
Edit:
As requested, the types:
tilePos.x, tilePos.y: LONG (the fields of the Windows API POINT)
boardPixelDims.*: double
Board::Dims::MAXX / MAXY: enum values (integral, 7 and 6 respectively)
Depending on whether CIRCLE_SIZE is intended as a radius or a diameter, two of your parameters in the FillEllipse call seem to be wrong. If it's a diameter, then you should be setting the location to pos.x - CIRCLE_SIZE/2 and pos.y - CIRCLE_SIZE/2. If it's a radius, then the height and width parameters should each be 2*CIRCLE_SIZE rather than CIRCLE_SIZE.
Update: since you changed the variable name to CIRCLE_RADIUS, the latter solution is now obviously the correct one. A corrected call is sketched below.
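As a sketch of that fix (assuming the same Gdiplus::Graphics g, brush and pos from the question):
// FillEllipse takes the bounding box: left, top, width, height.
g.FillEllipse(&red,
              (int)(pos.x - CIRCLE_RADIUS),  // left   = centre - radius
              (int)(pos.y - CIRCLE_RADIUS),  // top    = centre - radius
              2 * CIRCLE_RADIUS,             // width  = diameter
              2 * CIRCLE_RADIUS);            // height = diameter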
The easiest way I remember what arguments the shape-related functions take is to always think in rectangles: FillEllipse just draws an ellipse that fills the rectangle you give it as x, y, width and height.
A simple experiment to practice with: change your calls to FillRect, get everything positioned okay, and then change them back to FillEllipse.