Abstract
My ultimate goal is to use FLTK to take user input in pixels, display a generated maze (either my own, or one fetched from the website mentioned in the details), and then show the animated solution.
This is what I've managed so far:
https://giant.gfycat.com/VioletWelloffHatchetfish.webm
Details
I'm in my first C++/algorithms class of a bachelor's in CE.
As we've been learning about graphs, Dijkstra, etc. over the last few weeks, I decided, after watching Computerphile's video on maze solving, to try to put the theory into practice.
At first I wanted to output a maze from this site, http://hereandabove.com/maze/mazeorig.form.html, with the plotted solution. I chose to make walls and paths 1x1 pixel each, to make the maze easier to turn into a 2D vector and then a graph.
This went well, and my program outputs a solved .png file, using Dijkstra to find the shortest path.
I then wanted to put the entire solution into an animated GIF.
This also works well. For each pixel it colors green/yellow, it passes an RGBA vector to a GIF library, and in the end I get an animated step-by-step solution.
For each RGBA vector passed to the GIF library, I also scale it up first, using this function:
// Both 'buffer' and 'resized_buffer' are member variables. For each plotted
// pixel in the path the code updates 'buffer', and this function writes a
// larger version of it into 'resized_buffer'.
// HEIGHT and WIDTH are the original size.
// nHeight and nWidth are the new size.
bool Maze_IMG::resample(int nWidth, int nHeight)
{
    if (buffer.size() == 0) return false;
    resized_buffer.clear();
    for (int i = 0; i < nWidth * nHeight * 4; i++) resized_buffer.push_back(-1);
    double scaleWidth  = (double)nWidth / (double)WIDTH;
    double scaleHeight = (double)nHeight / (double)HEIGHT;
    for (int cy = 0; cy < nHeight; cy++)
    {
        for (int cx = 0; cx < nWidth; cx++)
        {
            int pixel = (cy * (nWidth * 4)) + (cx * 4);
            int nearestMatch = (((int)(cy / scaleHeight) * (WIDTH * 4)) + ((int)(cx / scaleWidth) * 4));
            resized_buffer[pixel]     = buffer[nearestMatch];
            resized_buffer[pixel + 1] = buffer[nearestMatch + 1];
            resized_buffer[pixel + 2] = buffer[nearestMatch + 2];
            resized_buffer[pixel + 3] = buffer[nearestMatch + 3];
        }
    }
    return true;
}
Problems
The problem is that it takes a very long time to do this with scaling, even with "small" mazes of 50x50 pixels scaled to, say, 300x300. I've spent a lot of time making the code as efficient and fast as possible, but after I added the scaling, runs that used to take 10 minutes now take hours.
In FLTK I use the Fl_Anim_Gif library to display animated GIFs, but it won't load the maze GIFs that have been scaled up (still troubleshooting this).
My real questions
Is it possible to improve the scaling function so that it does not take forever, or is this a totally wrong approach?
Is it a stupid idea to try to display it as a GIF in FLTK? Would it be easier to just draw it directly in FLTK, or should I rather try to display the images one after another in FLTK?
I'm just familiarizing myself with FLTK. Would it be easier to use something like Qt instead, and would that be more beneficial in the long run as far as learning a GUI library goes?
I'm mainly doing this for learning, and to start building some sort of portfolio for when I graduate. Is it beneficial at all to make a GUI for this, or is it a waste of time?
Any thoughts or input would be greatly appreciated.
Whatever graphics package you use, the performance will be similar. It depends on how you handle the internals. For instance:
If you write to a buffer and blit it to the screen, it will be faster than writing to the screen directly.
If you only blit on the paint event, it will be faster than forcing an update every time the screen data changes.
If you preallocate the buffers, the system does not have to keep reallocating whenever the buffer space runs out.
Assuming the space is preallocated, it can be written to without clearing first: every cell is going to be written to, so there is no need to clear, allocate, and reallocate. A sketch applying the preallocation points to the resample() function above follows below.
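To make the preallocation point concrete, here is roughly how the resample() function from the question could avoid the per-pixel push_back and recompute the source row only once per output row. It assumes, as in the question, that buffer and resized_buffer are RGBA byte vectors and WIDTH/HEIGHT are the source dimensions; treat it as a sketch, not a drop-in replacement:

bool Maze_IMG::resample(int nWidth, int nHeight)
{
    if (buffer.empty()) return false;

    // Preallocate in one call; the fill value is irrelevant because every
    // element is overwritten below.
    resized_buffer.assign(static_cast<size_t>(nWidth) * nHeight * 4, 0);

    const double scaleWidth  = static_cast<double>(nWidth) / WIDTH;
    const double scaleHeight = static_cast<double>(nHeight) / HEIGHT;

    for (int cy = 0; cy < nHeight; cy++)
    {
        // The nearest source row is the same for the whole output row.
        const int srcRow = static_cast<int>(cy / scaleHeight) * (WIDTH * 4);
        const int dstRow = cy * (nWidth * 4);

        for (int cx = 0; cx < nWidth; cx++)
        {
            const int src = srcRow + static_cast<int>(cx / scaleWidth) * 4;
            const int dst = dstRow + cx * 4;

            // Copy one RGBA pixel (nearest-neighbour).
            resized_buffer[dst]     = buffer[src];
            resized_buffer[dst + 1] = buffer[src + 1];
            resized_buffer[dst + 2] = buffer[src + 2];
            resized_buffer[dst + 3] = buffer[src + 3];
        }
    }
    return true;
}

The scaling itself is cheap, though; if the run time is still dominated by re-encoding a full upscaled frame for every animation step, it may be worth scaling only once at display time instead of once per GIF frame.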
Related
UPDATE:
I couldn't figure out the exact problem, but I made a fix that's good enough for me: whenever the player's X value is less than half the screen's width, I just snap the view back to the center (upper-left corner) using sf::View::setCenter().
So I'm working on a recreation of Zelda II to learn SFML well enough to make my own game based on Zelda II. The issue is with screen scrolling: for some reason, if Link walks away from the wall and gets the camera to follow him, and then moves back toward the wall, the camera won't go all the way back to the end of the wall; the same thing happens at the other wall at the end of the scene/room. This can be done multiple times, moving the camera's stopping point further and further away from the wall. It happens on both sides of the scene, and I have reason to believe it has something to do with my attempt to make the game frame-rate independent. Here's a GIF of my issue to help understand:
My camera function:
void Game::camera() {
    if (this->player.getVar('x') >= this->WIDTH / 2 and this->player.getVar('x') < this->sceneWidth - this->WIDTH / 2) {
        this->view.move(int(this->player.getVar('v') * this->player.dt * this->player.dtM), 0);
    }
}
player.getVar() is a temporary function I'm using to get the player's x position and x velocity: the argument 'x' returns the player's x position, and 'v' returns the x velocity. WIDTH is equal to 256, and sceneWidth equals 767, which is the width of the image I'm using for the background. dt and dtM are variables for the frame independence I mentioned earlier; this is the declaration:
sf::Clock sclock;
float dt = 0;
float dtM = 60;
int frame = 0;

void updateTime() {
    dt = sclock.restart().asSeconds();
    frame += 1 * dt * dtM;
}
updateTime() is called every frame, so dt is updated every frame as well. frame is just a frame counter for Link's animations and isn't relevant to the question. Everything that moves and is rendered on the screen is multiplied by dt and dtM.
There's a clear mismatch between the movement of the player and that of the camera. You don't show the code that moves the player, but my guess is that you don't cast the movement to int there, as you do in the view.move call. That wouldn't be a problem if you were setting the absolute position of the camera, but since you are constantly moving it, the little offset accumulates each frame, causing your problem.
One possible solution is to skip the cast, which is unnecessary because sf::View::move accepts floats as arguments.
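To make the accumulation visible, here is a tiny stand-alone sketch; the per-frame step of 1.7 pixels is a made-up value, not taken from your code:

#include <iostream>

int main()
{
    // Hypothetical per-frame movement: velocity * dt * dtM = 1.7 pixels.
    const float step = 1.7f;

    float playerX = 0.0f; // player moved with full float precision
    float cameraX = 0.0f; // camera moved with the value truncated to int

    for (int frame = 0; frame < 100; ++frame) {
        playerX += step;
        cameraX += static_cast<int>(step); // int(...) throws away 0.7 px every frame
    }

    // After 100 frames the camera lags the player by roughly 70 pixels.
    std::cout << "player: " << playerX << "  camera: " << cameraX << '\n';
}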
void Game::camera() {
    if (this->player.getVar('x') >= this->WIDTH / 2 and this->player.getVar('x') < this->sceneWidth - this->WIDTH / 2) {
        this->view.move(this->player.getVar('v') * this->player.dt * this->player.dtM, 0);
    }
}
Or, even better, don't use view.move at all; directly set the position of the camera each frame. Something like:
void Game::camera() {
    if (this->player.getVar('x') >= this->WIDTH / 2 and this->player.getVar('x') < this->sceneWidth - this->WIDTH / 2) {
        this->view.setCenter(this->player.getVar('x'), this->view.getCenter().y);
    }
}
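If you go the setCenter() route, a variant worth considering is to clamp the centre to the scene bounds instead of skipping the update near the walls; that keeps the camera flush against both ends. This is only a sketch, reusing the member names from the question and assuming the values convert cleanly to float:

#include <algorithm> // std::min / std::max

void Game::camera() {
    const float half = this->WIDTH / 2.0f;
    const float x = this->player.getVar('x');

    // Follow the player, but never let the view show anything outside the scene.
    const float centerX = std::min(std::max(x, half), this->sceneWidth - half);
    this->view.setCenter(centerX, this->view.getCenter().y);
}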
I am trying to build an autoclicker in C++ to beat a 2D video game in which the following situation appears:
The main character is in the center of the screen, the background is completely black, and enemies come from all directions. I want my program to be capable of clicking on enemies just as they appear on the screen.
What I came up with at first is that the enemies have a minimum size of 15px, so I tried sampling every 15 pixels and checking whether any pixel differs from the background's RGB, using GetPixel(). It looks something like this:
COLORREF color;
int R, G, B;
for (int i = 0; i < SCREEN_SIZE_X; i += 15) { // These SCREEN_SIZE values are #defined with the ones of my screen
    for (int j = 0; j < SCREEN_SIZE_Y; j += 15) {
        // The following conditional excludes the center, which is the player's position
        if ((i < PLAYER_MIN_EDGE_X or i > PLAYER_MAX_EDGE_X) and (j < PLAYER_MIN_EDGE_Y or j > PLAYER_MAX_EDGE_Y)) {
            color = GetPixel(GetDC(nullptr), i, j);
            R = GetRValue(color);
            G = GetGValue(color);
            B = GetBValue(color);
            if (R != 0 or G != 0 or B != 0) cout << "Enemy Found" << endl;
        }
    }
}
It turns out that, as expected, the GetPixel() function is extremely slow, as it has to check about 4000 pixels to cover just one screen scan. I was thinking about a faster way to solve this, and while looking at the keyboard I noticed the "Prt Scr" button and realized that whatever that button does, it is able to almost instantly save the information of millions of pixels.
I'm sure there is a proper, different technique for approaching this kind of problem.
What kind of theory or technique for pixel analysis should I investigate and read about so that this can be considered respectable code, and so it actually works, and much faster?
The GetPixel() routine is slow because it fetches the data from the video card (device) memory one pixel at a time. So to optimize your loop, you have to fetch the entire screen at once and put it into an array of pixels. Then you can iterate over that array much faster, because you'll be operating on data in your RAM (host memory).
As a further optimization, I also recommend clearing the pixels of your player (in the center of the screen) after fetching the screen into your pixel array. This way you can eliminate the if((i<PLAYER_MIN_EDGE_X or i>PLAYER_MAX_EDGE_X) and (j<PLAYER_MIN_EDGE_Y or j>PLAYER_MAX_EDGE_Y)) condition inside the loop.
CImage image;
//Save DC to image
int R, G, B;
BYTE *pRealData = (BYTE*)image.GetBits();
int pit = image.GetPitch();
int bitCount = image.GetBPP() / 8;
int w = image.GetWidth();
int h = image.GetHeight();
for (int i = 0; i < h; i++)
{
    for (int j = 0; j < w; j++)
    {
        B = *(pRealData + pit * i + j * bitCount);
        G = *(pRealData + pit * i + j * bitCount + 1);
        R = *(pRealData + pit * i + j * bitCount + 2);
    }
}
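If you are not tied to CImage, the same idea works with plain Win32 GDI: copy the whole screen into a memory bitmap once with BitBlt, pull the pixels into a std::vector with GetDIBits, and scan the vector. A minimal sketch, with error handling omitted and the 15-pixel stride and "enemy" test carried over from the question:

#include <windows.h>
#include <iostream>
#include <vector>

int main()
{
    const int w = GetSystemMetrics(SM_CXSCREEN);
    const int h = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(nullptr);
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    // One blit of the entire screen into host-accessible memory.
    BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);

    BITMAPINFO bi = {};
    bi.bmiHeader.biSize        = sizeof(bi.bmiHeader);
    bi.bmiHeader.biWidth       = w;
    bi.bmiHeader.biHeight      = -h;      // negative height = top-down rows
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 32;      // BGRA
    bi.bmiHeader.biCompression = BI_RGB;

    std::vector<BYTE> pixels(static_cast<size_t>(w) * h * 4);
    GetDIBits(memDC, bmp, 0, h, pixels.data(), &bi, DIB_RGB_COLORS);

    // Scan every 15th pixel, as in the question (player exclusion omitted).
    for (int y = 0; y < h; y += 15) {
        for (int x = 0; x < w; x += 15) {
            const BYTE* p = &pixels[(static_cast<size_t>(y) * w + x) * 4];
            if (p[0] != 0 || p[1] != 0 || p[2] != 0)   // B, G, R
                std::cout << "Enemy found at " << x << "," << y << "\n";
        }
    }

    SelectObject(memDC, old);
    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
}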
I'm using SDL to write a simulation that displays quite a big tilemap (around 240*240 tiles). Since I'm quite new to the SDL library, I can't really tell whether the pretty slow performance when rendering more than 50,000 tiles is actually normal. Every tile is visible at all times and is about 4*4 px. Currently the code iterates through a 2D array every frame and renders every single tile, which gives me about 40 fps, too slow to actually put any game logic behind the system.
I tried to find some alternative approaches, like only updating changed tiles, but people always commented on how this is bad practice and that the renderer is supposed to be cleared every frame, and so on.
Here's a picture of the map.
So I basically wanted to ask whether there is any more performant approach than rendering every single tile every frame.
Edit: Here's the simple rendering method I'm using:
void World::DirtyBiomeDraw(Graphics *graphics) {
    if (_biomeTexture == NULL) {
        _biomeTexture = graphics->loadImage("assets/biome_sprites.png");
        printf("Biome texture loaded.\n");
    }
    for (int i = 0; i < globals::WORLD_WIDTH; i++) {
        for (int l = 0; l < globals::WORLD_HEIGHT; l++) {
            SDL_Rect srect;
            srect.h = globals::SPRITE_SIZE;
            srect.w = globals::SPRITE_SIZE;
            if (sites[l][i].biome > 0) {
                srect.y = 0;
                srect.x = (globals::SPRITE_SIZE * sites[l][i].biome) - globals::SPRITE_SIZE;
            }
            else {
                srect.y = globals::SPRITE_SIZE;
                srect.x = globals::SPRITE_SIZE * fabs(sites[l][i].biome);
            }
            SDL_Rect drect = {i * globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              l * globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              globals::SPRITE_SIZE * globals::SPRITE_SCALE};
            graphics->blitOnRenderer(_biomeTexture, &srect, &drect);
        }
    }
}
In this context every tile is called a "site", because sites also store information like moisture, temperature, and so on.
Every site gets a biome assigned during the generation process. Every biome is basically an ID: every land biome has an ID higher than 0, and every water ID is 0 or lower.
This lets me put every biome sprite into the "biome_sprites.png" image ordered by ID. All the land sprites are in the first row, while all the water tiles are in the second row. This way I don't have to manually assign a sprite to a biome; the method can do it itself by multiplying the tile size (basically the width) by the biome ID.
Here's the biome ID table from my SDD/GDD and the actual spritesheet.
The blitOnRenderer method from the graphics class basically just runs SDL_RenderCopy, blitting the texture onto the renderer.
void Graphics::blitOnRenderer(SDL_Texture *texture, SDL_Rect *sourceRectangle, SDL_Rect *destinationRectangle) {
    SDL_RenderCopy(this->_renderer, texture, sourceRectangle, destinationRectangle);
}
In the game loop, RenderClear and RenderPresent get called every frame.
I really hope I explained it understandably. Ask anything you want; I'm the one asking you guys for help, so the least I can do is be cooperative :D
Poke the SDL2 devs for a multi-item version of SDL_RenderCopy() (similar to the existing SDL_RenderDrawLines()/SDL_RenderDrawPoints()/SDL_RenderDrawRects() functions) and/or batched SDL_Renderer backends.
Right now you're trying to slam at least 240*240 = 57,600 draw calls down the GPU's throat; you can usually only count on 1,000-4,000 draw calls in any given 16 milliseconds.
Alternatively, switch to OpenGL and do the batching yourself.
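A different route, in case neither of those is an option: prerender the static map once into a single render-target texture and copy that one texture per frame, so the ~57,600 per-tile copies only happen when the map actually changes. This is only a sketch and assumes your Graphics class can expose its SDL_Renderer* and that the renderer was created with target-texture support:

// Build the static map texture once (or whenever a tile changes).
SDL_Texture* buildMapTexture(SDL_Renderer* renderer, SDL_Texture* biomeTexture)
{
    const int mapW = globals::WORLD_WIDTH  * globals::SPRITE_SIZE * globals::SPRITE_SCALE;
    const int mapH = globals::WORLD_HEIGHT * globals::SPRITE_SIZE * globals::SPRITE_SCALE;

    SDL_Texture* map = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                         SDL_TEXTUREACCESS_TARGET, mapW, mapH);

    SDL_SetRenderTarget(renderer, map);       // draw into the texture...
    // ...run the existing per-tile loop from DirtyBiomeDraw here, blitting from biomeTexture...
    SDL_SetRenderTarget(renderer, nullptr);   // ...then switch back to the window

    return map;
}

// Per frame: one draw call instead of tens of thousands.
void drawMap(SDL_Renderer* renderer, SDL_Texture* map)
{
    SDL_RenderCopy(renderer, map, nullptr, nullptr);
}

One caveat: a full 240-tile map at sprite size times scale can approach the GPU's maximum texture size, in which case the map would have to be split into a few chunks rendered the same way.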
I want to pixelate an image stored in a 1D array, although I am not sure how to do it. This is what I have come up with so far...
The value of pixelation is currently 3 for testing purposes.
Currently it just creates a section of randomly coloured pixels along the left third of the image; if I increase the value of pixelation, the amount of randomly coloured pixels decreases, and vice versa. So what am I doing wrong?
I have also already implemented the rotation, reading of the image, and saving of a new image; this is just a separate function which I need assistance with.
picture pixelate(const std::string& file_name, picture& tempImage, int& pixelation /* TODO: OTHER PARAMETERS HERE */)
{
    picture pixelated = tempImage;
    RGB tempPixel;
    tempPixel.r = 0;
    tempPixel.g = 0;
    tempPixel.b = 0;
    int counter = 0;
    int numtimesrun = 0;
    for (int x = 1; x < tempImage.width; x += pixelation)
    {
        for (int y = 1; y < tempImage.height; y += pixelation)
        {
            //RGB tempcol;
            //tempcol for pixelate
            for (int i = 1; i < pixelation; i++)
            {
                for (int j = 1; j < pixelation; j++)
                {
                    tempPixel.r += tempImage.pixel[counter + pixelation * numtimesrun].colour.r;
                    tempPixel.g += tempImage.pixel[counter + pixelation * numtimesrun].colour.g;
                    tempPixel.b += tempImage.pixel[counter + pixelation * numtimesrun].colour.b;
                    counter++;
                    //read colour
                }
            }
            for (int k = 1; k < pixelation; k++)
            {
                for (int l = 1; l < pixelation; l++)
                {
                    pixelated.pixel[numtimesrun].colour.r = tempPixel.r / pixelation;
                    pixelated.pixel[numtimesrun].colour.g = tempPixel.g / pixelation;
                    pixelated.pixel[numtimesrun].colour.b = tempPixel.b / pixelation;
                    //set colour
                }
            }
            counter = 0;
            numtimesrun++;
        }
        cout << x << endl;
    }
    cout << "Image successfully pixelated." << endl;
    return pixelated;
}
I'm not too sure what you really want to do with your code, but I can see a few problems.
For one, you use for() loops with variables starting at 1. That's almost certainly wrong: arrays in C/C++ start at 0.
The other main problem I can see is the pixelation parameter. You use it to increase x and y without knowing (at least in that function) whether it is a multiple of width and height. If it is not, you will definitely be missing pixels on the right edge and at the bottom (which edges depends on the orientation, of course). Again, it very much depends on what you're trying to achieve.
Also, the i and j loops start at the position defined by counter and numtimesrun, which means that the last line you want to hit is not tempImage.width or tempImage.height. With that you are rather likely to have many overflows. Actually, that would also explain the problems you see on the edges. (See the update below.)
Another potential problem, which I cannot tell for sure without seeing the structure declaration: the sums using tempPixel.r/g/b += <value> may overflow. If the RGB components are defined as unsigned char (rather common), then you will definitely get overflows, and your averaging is broken if that is the case. If that structure uses floats, then you're good.
Note also that your average is wrong. You are adding source data for pixelation x pixelation pixels, but your average is calculated as sum / pixelation. So you get a total that is pixelation times too large. You probably wanted sum / (pixelation * pixelation).
Your first loop with i and j computes a sum. The math is most certainly wrong. The counter + pixelation * numtimesrun expression will start reading at the second line, it seems. However, you are reading i * j values. That being said, it may be what you are trying to do (i.e. a moving average), in which case it could be optimized, but I'll leave that out for now.
Update
If I understand what you are doing, a representation would be something like a filter. Here is a picture of a 3x3:

.+.
+*+   =>   *
.+.

What is on the left is what you are reading. This means the source needs to be at least 3x3. What I show on the right is the result: as we can see, the result is 1x1. From what I see in your code, you do not take that into account at all. (The varied characters represent varied weights; in your case all weights are 1.0.)
You have two ways to handle that problem:
The resulting image has a size of width - pixelation * 2 + 1 by height - pixelation * 2 + 1; in this case you keep one result and do not care about the edges...
You rewrite the code to handle edges. This means you use less source data to compute the resulting edges. Another way is to compute the edge cases and save that in several output pixels (i.e. duplicate the pixels on the edges).
Update 2
Hmmm... looking at your code again, it seems that you compute the average of the 3x3 block and save it into the 3x3 block:

.+.        ***
+*+   =>   ***
.+.        ***

Then the problem is different: numtimesrun is wrong. In your k and l loops you save pixelation * pixelation pixels into the SAME destination pixel, and that destination advances by only one each time... so you are doing what I showed in my first update, but it looks like you were trying to do what is shown in my second update.
The numtimesrun could be increased by pixelation each time:
numtimesrun += pixelation;
However, that's not enough to fix your k and l loops. There you probably need to calculate the correct destination. Maybe something like this (also requires a reset of the counter before the loop):
counter = 0;
... for loops ...
pixelated.pixel[counter+pixelation*numtimesrun].colour.r = ...;
... (take care of g and b)
++counter;
Yet again, I cannot tell for sure what you are trying to do, so I do not know why you'd want to copy the same pixel pixelation x pixelation times. But that explains why you get data only at the left (or top) of the image (which side depends on the orientation, but one side for sure; and if it's one third, then pixelation is probably 3).
WARNING: if you implement the save properly, you'll experience crashes if you do not take care of the overflows mentioned earlier.
Update 3
As explained by Mark in the comment below, you have an array representing a 2D image. In that case, your counter variable is completely wrong, since it is 100% linear whereas the 2D image is not: the second line is width further along. At this point you read the first 3 pixels at the top-left, then the next 3 pixels on the same line, and finally the next 3 pixels still on the same line. Of course, it could be that your image is defined that way and these pixels really are one after another, although it is not very likely...
Mark's answer is concise and gives you the information necessary to access the correct pixels. However, you will still be hit by the overflow, and possibly by the fact that the width and height parameters are not multiples of pixelation...
I don't do a lot of C++, but here's a pixelate function I wrote for Processing. It takes an argument for the width/height of the pixels you want to create.
void pixelateImage(int pxSize) {
    // use ratio of height/width...
    float ratio;
    if (width < height) {
        ratio = float(height) / width; // cast so the division isn't truncated to an int
    }
    else {
        ratio = float(width) / height;
    }
    // ... to set pixel height
    int pxH = int(pxSize * ratio);
    noStroke();
    for (int x = 0; x < width; x += pxSize) {
        for (int y = 0; y < height; y += pxH) {
            fill(p.get(x, y));
            rect(x, y, pxSize, pxH);
        }
    }
}
Without the built-in rect() function, you'd have to write pixel-by-pixel using another two for loops inside the block:
for (int px = 0; px < pxSize; px++) {
    for (int py = 0; py < pxH; py++) {
        // offset by the block's top-left corner (x, y) so each block is filled in place
        pixelated.pixel[(y + py) * tempImage.width + (x + px)].colour.r = tempPixel.r;
        pixelated.pixel[(y + py) * tempImage.width + (x + px)].colour.g = tempPixel.g;
        pixelated.pixel[(y + py) * tempImage.width + (x + px)].colour.b = tempPixel.b;
    }
}
Generally, when accessing an image stored in a 1D buffer, each row of the image is stored as consecutive pixels and the next row follows immediately after. The way to address into such a buffer is:
image[y*width+x]
For your purposes, you want both inner loops to generate coordinates that go from the top-left of the pixelation square to the bottom-right.
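Putting those pieces together, a corrected block-averaging pixelate might look roughly like this. The picture/RGB types below are minimal stand-ins for the ones in the question, and blocks that run past the right/bottom edge are simply clipped:

#include <algorithm> // std::min
#include <vector>

// Minimal stand-ins for the question's types.
struct RGB { unsigned char r, g, b; };
struct Pixel { RGB colour; };
struct picture { int width = 0, height = 0; std::vector<Pixel> pixel; };

picture pixelate(const picture& tempImage, int pixelation)
{
    picture pixelated = tempImage;

    for (int by = 0; by < tempImage.height; by += pixelation) {
        for (int bx = 0; bx < tempImage.width; bx += pixelation) {
            // Clip the block at the right/bottom edges.
            const int bw = std::min(pixelation, tempImage.width  - bx);
            const int bh = std::min(pixelation, tempImage.height - by);

            // Sum the block in wide integers so the sum cannot overflow.
            long sumR = 0, sumG = 0, sumB = 0;
            for (int y = by; y < by + bh; ++y) {
                for (int x = bx; x < bx + bw; ++x) {
                    const RGB& c = tempImage.pixel[y * tempImage.width + x].colour;
                    sumR += c.r; sumG += c.g; sumB += c.b;
                }
            }
            const int n = bw * bh;

            // Write the average back into every pixel of the same block.
            for (int y = by; y < by + bh; ++y) {
                for (int x = bx; x < bx + bw; ++x) {
                    RGB& c = pixelated.pixel[y * tempImage.width + x].colour;
                    c.r = static_cast<unsigned char>(sumR / n);
                    c.g = static_cast<unsigned char>(sumG / n);
                    c.b = static_cast<unsigned char>(sumB / n);
                }
            }
        }
    }
    return pixelated;
}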
I'm coding a small graphics editor and need some help.
I'm painting a QImage like this:
void Editor::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);
    // zoom is an int, representing a zoom factor from 1 to 12.
    painter.drawImage(QRect(0, 0, image.width() * zoom, image.height() * zoom),
                      image);
    if (zoom >= 3 && showGrid) {
        // One pen carrying both the palette colour and the dotted style
        // (a second setPen(Qt::DotLine) call would replace the colour with black).
        painter.setPen(QPen(palette().foreground().color(), 0, Qt::DotLine));
        // this is how I draw the grid
        for (int i = 0; i <= image.width(); ++i)
            painter.drawLine(zoom * i, 0,
                             zoom * i, zoom * image.height());
        for (int j = 0; j <= image.height(); ++j)
            painter.drawLine(0, zoom * j,
                             zoom * image.width(), zoom * j);
    }
    // (...)
}
It works fine with images like this one (16 x 16).
The trouble begins when I open images like this one (25 x 28).
As you can see, pixels are drawn with different widths and heights!
What am I doing wrong? Please help :)
UPD: Problem solved unexpectedly. I noticed that Editor was a QGLWidget, so I tried changing it to a QWidget and everything worked just fine. Stupid me -_-
By the way, is there maybe a more convenient way to zoom the image (like cropping the pixels that don't need to be painted)?
The code for handling highly zoomed images was "optimized" some time ago in Qt, and it is now unfortunately buggy. I didn't check the code, but my wild guess is that the texture "speed" or "offset" used for drawing was previously computed in floating point and is now computed using fixed point.
I don't remember exactly which version introduced this, but it was quite early, soon after 4.0. One of our applications needs to allow placing a cross with sub-pixel precision, and when scaling over the point at a high zoom factor you can notice the picture "wobbling".
I'm the first who would not claim a bug in someone else's code unless 100% sure, but this is one of those cases in which I am indeed 100% sure.
The only way out is to draw the zoomed image manually, either by reimplementing the texture mapping code or (if you only need integer zoom factors greater than 1) by drawing one pixel at a time with drawRect... it should be fast enough on a PC.
Note that the bug may be a common bug in video drivers rather than in Qt... I've seen that the problem in our software is present on different platforms (Windows/Linux/OS X) and indeed, IIRC, only when using QWidget (and not when using QGLWidget).
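For the drawRect route, here is a minimal sketch of doing the zoom by hand inside paintEvent, for integer zoom factors only; it uses fillRect and the image/zoom members from the question:

void Editor::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);

    // Draw every source pixel as a zoom x zoom filled square.
    for (int y = 0; y < image.height(); ++y) {
        for (int x = 0; x < image.width(); ++x) {
            painter.fillRect(x * zoom, y * zoom, zoom, zoom,
                             QColor::fromRgba(image.pixel(x, y)));
        }
    }
    // ...grid drawing as before...
}

Limiting the two loops to the pixels that intersect event->rect() would also cover the question's idea of cropping pixels that don't need to be painted, since only the exposed region would be repainted.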