I want my OpenGL view to be pixelated.
I have tried several shaders from around the internet that produce a pixelated image, but they work on textures, not on 3D models (I want the models' edges to be pixelated too).
This is what I have tried:
And this is a reference of what I am expecting:
Is there any way of accomplishing it with shaders, or with code?
EDIT:
I used a framebuffer, as in Makogan's answer, and the output now looks pixelated.
But I have run into another problem: the framebuffer seems to be copying itself, which eventually creates a mess.
This is how it looks:
This is the code I used to create the framebuffer:
FBO = glGenFramebuffers(1)
DBO = glGenRenderbuffers(1)
glBindRenderbuffer(GL_RENDERBUFFER, DBO)
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, 1280, 720)
glBindFramebuffer(GL_FRAMEBUFFER, FBO)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, DBO)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
And this piece of code is in the main loop:
# draw everything
glBindFramebuffer(GL_FRAMEBUFFER, 0)
glBlitFramebuffer(
    640 - 256, 360 - 144,   # source rectangle, lower-left corner
    640 + 256, 360 + 144,   # source rectangle, upper-right corner
    0, 0,                   # destination rectangle, lower-left corner
    640, 360,               # destination rectangle, upper-right corner
    GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
    GL_NEAREST
)
If you want to pixelate your output, just render to a framebuffer with very small dimensions (say 256 x 256), turn off all anti-aliasing, and then upscale the result to the window.
You will get pixelated 3D rendering.
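For example, a minimal PyOpenGL sketch of this idea (the 256 x 144 low-resolution size, the 1280 x 720 window size, and all variable names here are assumptions for illustration, not taken from the question):
# Create a small off-screen framebuffer with a color and a depth attachment.
PIXEL_W, PIXEL_H = 256, 144      # low "pixelated" resolution (assumed)
WIN_W, WIN_H = 1280, 720         # window size (assumed)

fbo = glGenFramebuffers(1)
color_rbo = glGenRenderbuffers(1)
depth_rbo = glGenRenderbuffers(1)

glBindRenderbuffer(GL_RENDERBUFFER, color_rbo)
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, PIXEL_W, PIXEL_H)
glBindRenderbuffer(GL_RENDERBUFFER, depth_rbo)
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, PIXEL_W, PIXEL_H)

glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color_rbo)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_rbo)
assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
glBindFramebuffer(GL_FRAMEBUFFER, 0)

# Each frame: render the scene at the low resolution ...
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glViewport(0, 0, PIXEL_W, PIXEL_H)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
# ... draw the 3D scene here ...

# ... then stretch it over the whole window with nearest-neighbour filtering.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
glBlitFramebuffer(0, 0, PIXEL_W, PIXEL_H,    # whole low-res source
                  0, 0, WIN_W, WIN_H,        # whole window as destination
                  GL_COLOR_BUFFER_BIT, GL_NEAREST)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
glViewport(0, 0, WIN_W, WIN_H)               # restore the viewport for window-sized drawing
GL_NEAREST in the blit is what keeps the upscaled pixels sharp instead of smoothing them out.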
Related
I created a low-resolution framebuffer object to get a retro-style display.
The framebuffer seems to display itself, causing a mess of pixels at the bottom of the screen.
This is how it looks when the framebuffer is drawn so that it completely overlaps the viewport:
This is how it looks when the framebuffer is drawn so that it overlaps only a quarter of the viewport:
This is how I made the framebuffer and the renderbuffer:
FBO = glGenFramebuffers(1)
DBO = glGenRenderbuffers(1)
glBindRenderbuffer(GL_RENDERBUFFER, DBO)
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, 1280, 720)
glBindFramebuffer(GL_FRAMEBUFFER, FBO)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, DBO)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
And this is the code in the main loop:
glDrawElements(GL_TRIANGLES, len(indices), GL_UNSIGNED_INT, None) # Drawing Stuff
###
glBindFramebuffer(GL_FRAMEBUFFER, 0)
glBlitFramebuffer(
    640 - 128, 360 - 72,    # source rectangle, lower-left corner
    640 + 128, 360 + 72,    # source rectangle, upper-right corner
    0, 0,                   # destination rectangle, lower-left corner
    1280, 720,              # destination rectangle, upper-right corner
    GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
    GL_NEAREST
)
I am using Python 3 with PyOpenGL.
In the comments to this question, there are speculations about driver bugs. This is not the case. The one definitive source of correct OpenGL behavior is the OpenGL specification, and the current GL 4.6 spec states in section 18.3 "Copying Pixels" (emphasis mine):
Several commands copy pixel data between regions of the framebuffer (see section 18.3.1), or between regions of textures and renderbuffers (see section 18.3.2).
For all such commands, if the source and destination are identical or are different views of the same underlying texture image, and if the source and destination regions overlap in that framebuffer, renderbuffer, or texture image, pixel values resulting from the copy operation are undefined.
Note that the binding target GL_FRAMEBUFFER is a shortcut for both GL_READ_FRAMEBUFFER (which defines the source for the blit) and GL_DRAW_FRAMEBUFFER (which specifies the destination), so you are creating exactly this feedback loop here.
However, it remains totally unclear what you are doing. The blit from the default framebuffer to the default framebuffer means it will not show the contents of your FBO at all, and since your FBO does not have a color attachment, you can't render color data to it anyway.
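For reference, once the FBO has a color attachment, the blit that actually displays its contents reads from the FBO and draws to the default framebuffer, roughly like this (the 256 x 144 FBO size here is an assumption):
glBindFramebuffer(GL_READ_FRAMEBUFFER, FBO)   # source: the off-screen FBO
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)     # destination: the window
glBlitFramebuffer(0, 0, 256, 144,             # whole FBO (size assumed)
                  0, 0, 1280, 720,            # whole window
                  GL_COLOR_BUFFER_BIT, GL_NEAREST)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
Since the read and draw framebuffers are now different images, the overlap rule quoted above no longer applies.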
I have got a small OpenGL app and I am looking for the optimal way of blitting several texture buffers at once.
Let's say I have got two framebuffers (fbo1, fbo2) that each contain two texture buffers. And I have got a target fbo (fbo3) with four texture buffers. And I want to blit all the textures from fbo1 and fbo2 to fbo3.
Currently I am doing it separately for each texture, like this:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo1)
glReadBuffer(GL_COLOR_ATTACHMENT0)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo3)
glDrawBuffer(GL_COLOR_ATTACHMENT0)
glBlitFramebuffer(0, 0, width, height, 0, 0, ds_width, ds_height, GL_COLOR_BUFFER_BIT, GL_LINEAR)
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
How is it usually done? And is that even doable?
It isn't "usually" done because people generally don't go around copying a bunch of framebuffer images a lot. Indeed, if you are, that strongly suggests that you're probably doing something wrong.
The only way to do it is the way you've done here (though the needless rebinding of the framebuffers can go away): change the read/draw buffers each time and blit.
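As a sketch (fbo1, fbo2, fbo3, width/height and ds_width/ds_height are from the question; which source attachment lands in which destination attachment is an assumption), that loop might look like:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo1)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo3)
for i in range(2):                              # fbo1 attachments 0-1 -> fbo3 attachments 0-1
    glReadBuffer(GL_COLOR_ATTACHMENT0 + i)
    glDrawBuffer(GL_COLOR_ATTACHMENT0 + i)
    glBlitFramebuffer(0, 0, width, height, 0, 0, ds_width, ds_height,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR)

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo2)    # fbo3 stays bound as the draw framebuffer
for i in range(2):                              # fbo2 attachments 0-1 -> fbo3 attachments 2-3
    glReadBuffer(GL_COLOR_ATTACHMENT0 + i)
    glDrawBuffer(GL_COLOR_ATTACHMENT0 + 2 + i)
    glBlitFramebuffer(0, 0, width, height, 0, 0, ds_width, ds_height,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR)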
I want to apologize for the confusing title; I will explain in more detail here.
I am learning about framebuffers through this website.
The tutorial creates a framebuffer for off-screen rendering and then renders it back to the screen as one image. After trying to code it myself, and also copying from the tutorial's source code, I found that the image rendered back on screen looked strange.
After quite a lot of rereading and observation, I found that the displayed image only captured 1/4 of the original (the one rendered off-screen), namely the bottom-left part. I guess it's probably because of the Mac's Retina display. So I set this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2 * SCR_WIDTH, 2 * SCR_HEIGHT, 0, GL_RGB,
GL_UNSIGNED_BYTE, NULL);
and also changed the renderbuffer storage settings to:
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 2 * SCR_WIDTH, 2 * SCR_HEIGHT);
The width and height were originally set without doubling. With the doubled values I get the expected result, and the window itself is still created with the original, non-doubled width and height.
Can anyone explain the theory behind this? Why do I need to double the width and height for off-screen rendering while the actual window keeps the original width and height settings?
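For reference, the framebuffer size in pixels can be queried instead of hard-coding the factor of two. A sketch assuming a GLFW window as in the tutorial, shown here with the Python glfw bindings (all names are illustrative):
import glfw

# Window size is given in screen coordinates; framebuffer size is in pixels.
# On a Retina/HiDPI display the framebuffer is typically twice as large.
win_w, win_h = glfw.get_window_size(window)
fb_w, fb_h = glfw.get_framebuffer_size(window)

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, fb_w, fb_h, 0, GL_RGB,
             GL_UNSIGNED_BYTE, None)
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, fb_w, fb_h)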
I am writing an interactive path tracer and I was wondering what the best way is to draw the result on screen in modern GL. I have the result of the rendering stored in a pixel buffer that is updated on each pass (+1 spp), and I would like to draw it on screen after each pass. I did some searching and people have suggested drawing a textured quad for displaying 2D images. Does that mean I would create a new texture each time I update? And given that my pixels are updated very frequently, is this still a good idea?
You don't need to create an entirely new texture every time you want to update the content. If the size stays the same, you can reserve the storage once, using glTexImage2D() with NULL as the last argument. E.g. for a 512x512 RGBA texture with 8-bit component precision:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
GL_RGBA, GL_UNSIGNED_BYTE, NULL);
In OpenGL 4.2 and later, you can also use:
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 512, 512);
You can then update all or parts of the texture with glTexSubImage2D(). For example, to update the whole texture following the example above:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                GL_RGBA, GL_UNSIGNED_BYTE, data);
Of course, if only rectangular part(s) of the texture change each time, you can make the updates more selective by choosing the offset and size arguments (the 3rd through 6th parameters) accordingly.
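For instance, a partial update of a hypothetical 64 x 64 region whose lower-left corner is at (x, y) would look like this (the offsets, region size, and sub_data are made up for illustration):
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 64, 64,
                GL_RGBA, GL_UNSIGNED_BYTE, sub_data)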
Once your current data is in a texture, you can either draw a textured screen size quad, or copy the texture to the default framebuffer using glBlitFramebuffer(). You should be able to find plenty of sample code for the first option. The code for the second option would look something like this:
// One time during setup.
GLuint readFboId = 0;
glGenFramebuffers(1, &readFboId);
glBindFramebuffer(GL_READ_FRAMEBUFFER, readFboId);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, tex, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
// Every time you want to copy the texture to the default framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, readFboId);
glBlitFramebuffer(0, 0, texWidth, texHeight,
0, 0, winWidth, winHeight,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
I recently implemented a zoom-in/zoom-out function in my simple 2D engine and experienced some terrible seams between adjacent textures, as shown here:
http://oi59.tinypic.com/anmxyf.jpg
It doesn't look that bad, but it was definitely annoying when it was constantly blinking at you when moving around.
I decided to change it so that a very large portion of the game (as much as the player is allowed to zoom out) is instead drawn to a framebuffer. I then draw the framebuffer, and when zooming in or out I increase/decrease the size at which the framebuffer texture is drawn, so as to avoid the seams.
At first I decided to draw five times as much as is visible to the player at the default zoom, so I made a framebuffer object with a texture five times as big and then drew to it.
Here is the initialization of the framebuffer object:
glGenFramebuffers(1, &main_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, main_framebuffer);
glClearColor(0.f, 0.f, 0.f, 1.f);
glClear(GL_COLOR_BUFFER_BIT);
glGenTextures(1, &main_ColorBuffer);
glBindTexture(GL_TEXTURE_2D, main_ColorBuffer);
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA, SCREEN_BUFFER_WIDTH, SCREEN_BUFFER_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(
GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, main_ColorBuffer, 0
);
Here, SCREEN_BUFFER_WIDTH and SCREEN_BUFFER_HEIGHT are set to five times the default screen size. I then draw my world as I normally would (where I could zoom out as much as I wanted and everything was fine, except for the seams).
The issue is that only a 1024 x 768 area (the default screen size) is drawn to the framebuffer. This is how I activate the framebuffer, draw to it, and then draw the framebuffer itself:
glBindFramebuffer(GL_FRAMEBUFFER, main_framebuffer);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw_blocks(); //draws all the blocks in the correct screen y and x position
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, main_ColorBuffer);
draw_quad(shift_y*ratio, shift_x*ratio, SCREEN_BUFFER_WIDTH*ratio, SCREEN_BUFFER_HEIGHT*ratio);
glBindTexture(GL_TEXTURE_2D, 0);
Here, ratio is a float I use for zooming in and out, and shift_y and shift_x are used to shift the framebuffer texture around to get a better feel for how things are going.
By zooming out and shifting the framebuffer a bit, I get this:
http://oi61.tinypic.com/34zlpip.jpg
It only draws a small portion (which exactly fits the screen if I don't zoom out).
In contrast, this is what it looks like if I zoom all the way out (and then some) before using a framebuffer and instead drawing straight to the screen:
http://oi61.tinypic.com/21akbit.jpg (the empty parts are just chunks that haven't been loaded as the player isn't supposed to be able to zoom out this much).
I'm truly stumped here; I've tried changing the viewport before drawing, but it does just about nothing.
I'd also like to note that I'm pretty confident the framebuffer texture really is five times as large as what's being drawn into it, because if I don't stretch it in my draw_quad function, and instead give it the same width and height as the screen, I get this:
http://oi62.tinypic.com/4triiu.jpg
The width and height of the framebuffer are now the same as the width and height of the screen, yet there are only graphics in a fraction of it, namely the small portion that's actually being drawn to.
Does anyone have any clue? If more portions of the code are needed I'm happy to oblige, but the entire thing is far too much to post.
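For what it's worth, the usual pattern when rendering into an attachment larger than the window is to make both the viewport and the projection cover the attachment's size while the FBO is bound; a rough sketch using the names from the question (set_projection is a hypothetical helper standing in for however the projection matrix is set up):
glBindFramebuffer(GL_FRAMEBUFFER, main_framebuffer)
glViewport(0, 0, SCREEN_BUFFER_WIDTH, SCREEN_BUFFER_HEIGHT)   # cover the whole attachment
set_projection(SCREEN_BUFFER_WIDTH, SCREEN_BUFFER_HEIGHT)     # hypothetical helper
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
draw_blocks()

glBindFramebuffer(GL_FRAMEBUFFER, 0)
glViewport(0, 0, 1024, 768)                                   # back to the window size
set_projection(1024, 768)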