I am developing an application with OpenGL + GLFW, with Linux as the target platform.
The default rasterization has VERY strong aliasing. I have implemented FXAA on top of my pipeline and I still get pretty strong aliasing; especially when there is some kind of animation or movement, the edges of meshes flicker. This renders the whole project useless.
So I thought I would also add supersampling, and I have been trying to implement it for two weeks already and still can't make it work. I'm starting to think it's not possible with the combination of PyOpenGL + GLFW + Ubuntu 18.04.
So, the question is: can I do supersampling by hand (without OpenGL extensions)? At the end of my (deferred) rendering pipeline I save all the data from the different passes to the hard drive, so I thought I would do something like this:
Render the image at 2x/3x resolution to a texture.
Read the texture buffer back into an array.
Average each 2x2/3x3/4x4 block of this array into a single pixel (rough sketch below).
Save the result to the hard drive.
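For the averaging step, something like this is what I have in mind (a rough numpy sketch; hi is assumed to be the high-resolution frame already read back, e.g. with glReadPixels):

    import numpy as np

    def box_downsample(hi, factor):
        # Average each factor x factor block of `hi` into one output pixel.
        h, w, c = hi.shape
        h -= h % factor                  # crop so the dimensions divide evenly
        w -= w % factor
        blocks = hi[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
        return blocks.mean(axis=(1, 3)).astype(hi.dtype)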
Obviously it's going to be slower than multisampling with an OpenGL extension and require more memory, but I don't need a high FPS and I have a pretty small resolution (like 480x640 or similar), so it might work out.
Do you guys have any thoughts about it? I would be grateful for any advice.
I am writing an application that must display images, and potentially loads of them. So I was wondering if there is a proper way of having a QGraphicsScene use OpenGL, and in case it fails, use a software renderer.
I've read the documentation, but what if setting the viewport fails?
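What I have in mind is roughly this (a hedged sketch, assuming PyQt5; probing for a context first may or may not be the proper test):

    from PyQt5.QtGui import QOpenGLContext
    from PyQt5.QtWidgets import QGraphicsView, QOpenGLWidget

    def set_viewport_with_fallback(view: QGraphicsView) -> None:
        # Probe whether an OpenGL context can be created at all.
        probe = QOpenGLContext()
        if probe.create():
            view.setViewport(QOpenGLWidget())  # hardware-accelerated viewport
        # Otherwise keep the default QWidget viewport (software rasterizer).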
You're talking about more than a gigabyte of textures. OpenGL by itself is of no help here, and neither is a raw QGraphicsScene. You'll need to dynamically cache the images, ideally with prediction based on scrolling direction and speed. You'll need a coupling layer between each view and the scene, to keep the scene populated with the images that are visible or will soon be visible in each view. Once you do that, OpenGL may help, but you absolutely need to profile things and prove to yourself that it helps. Even without OpenGL you can get very decent performance.
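As a hedged sketch of that coupling layer (PyQt5; the store object and its items_in/evict methods are hypothetical cache helpers, not Qt API):

    def repopulate(view, scene, store):
        # Map the viewport to scene coordinates, then prefetch roughly one
        # screen of margin around it (the margin size is an assumption).
        visible = view.mapToScene(view.viewport().rect()).boundingRect()
        region = visible.adjusted(-visible.width(), -visible.height(),
                                  visible.width(), visible.height())
        for item in store.items_in(region):    # hypothetical spatial lookup
            if item.scene() is None:
                scene.addItem(item)            # images visible or soon visible
        store.evict(scene, region)             # hypothetical: drop far-away items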
I realize this is probably a ridiculous question, but before trying to figure out what libraries to use for which projects, I think it makes sense to really understand the purpose of such libraries first.
A lot of video games use libraries like OpenGL. All the tutorials I've seen of such libraries demonstrate how to write code that tells the computer to draw something. Thing is, in games these days everything is modeled using software such as ZBrush, Maya, or 3ds Max. The models are textured and good to go. It seems like all you'd need to do is write an animation loop that draws the models and updates repeatedly, rather than actually program the code to draw every little thing. That would be both extremely time consuming and would make the models useless. So where does OpenGL or Direct3D come in, in relation to video games and 3D art? What is so crucial about them when all the graphics are already created and just need to be loaded and drawn? Are they used mainly for shaders and effects?
This question may just prove how new I am to this, but it's one I've never heard asked. I'm just starting to learn programming and I'm understanding the code and logic fairly well, but I don't understand graphics libraries or certain frameworks at all and tutorials are not helping.
It seems like all you'd need to do is write an animation loop that draws the models and updates repeatedly, rather than actually program the code to draw every little thing.
Everything that happens in a computer does so because a program of some form tells it exactly what to do. The letters this message is composed of only appear because your web browser of choice downloaded this file over HTTP on top of TCP/IP, decoded its UTF-8-encoded text, interpreted that text as defined by the XML, HTML, JavaScript, and so forth standards, and then displayed the visible portion as defined by the Unicode standard for text layout and in accord with HTML et al., using the display and windowing abilities of your OS or window manager or whatever.
Every single part of that operation, from the downloading of the file to its display, is governed by a piece of code. Every pixel you are looking at on the screen is where it is because some code put it there.
HTML alone doesn't mean anything. You cannot just take an HTML file and blast it to the screen; some code must interpret it. You can interpret HTML as a text file, but if you do, it loses all formatting and you get to see all of the tags. A web browser can interpret it as proper HTML, in which case you get to see the formatting. But in every case, the meaning of the HTML file is determined by how it is used.
The "draws the model" part of your proposed algorithm must be done by someone. If you don't write that code, then you must be using a library or some other system that will cause the model to appear. And what does that library do? How does it cause the model to appear?
A model, like an HTML web page, is meaningless by itself. Or to put it another way, your algorithm can be boiled down to this:
Animate the model.
????
Profit!
You're missing a key component: how to actually interpret the model and cause it to appear on the screen. OpenGL/D3D/a software rasterizer/etc is vital for that task.
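To make the missing step concrete, here is a hedged PyOpenGL sketch of what the innermost "draw" actually looks like; vao, index_count, and shader_program are assumed to have been built from the model's data beforehand:

    from OpenGL.GL import *

    def draw_model(vao, index_count, shader_program):
        glUseProgram(shader_program)  # the shaders that give the numbers meaning
        glBindVertexArray(vao)        # the GPU buffers holding the model's numbers
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, None)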
A lot of video games use libraries like OpenGL.
First and foremost: OpenGL is not a library per se, but an API (a specification). The OpenGL API may be implemented in the form of a software library, but these days it is much more common to implement OpenGL in the form of a driver that turns OpenGL function calls into control commands for a graphics processor (GPU) sitting on a graphics card.
All the tutorials I've seen of such libraries demonstrate how to write code that tells the computer to draw something.
Yes. This is because things need to be drawn to make any use of them.
Thing is, in games these days everything is modeled using software such as Zbrush, Maya, or 3ds Max.
At this point the models just consist of a large list of numbers, plus further numbers that tell how the other numbers form some sort of geometry. Those numbers are not some sort of ready-to-use image.
The models are textured and are good to go.
They are a bunch of numbers, and what they have is some additional numbers controlling texturing. The textures themselves are in turn just numbers.
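To illustrate what "a bunch of numbers" means, a textured model boils down to arrays like these (a made-up single triangle):

    import numpy as np

    # Per-vertex numbers: position (x, y, z) and texture coordinate (u, v).
    vertices = np.array([
        [-0.5, -0.5, 0.0,  0.0, 0.0],
        [ 0.5, -0.5, 0.0,  1.0, 0.0],
        [ 0.0,  0.5, 0.0,  0.5, 1.0],
    ], dtype=np.float32)

    # Further numbers telling how the vertices form geometry: one triangle.
    indices = np.array([0, 1, 2], dtype=np.uint32)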
It seems like all you'd need to do is write an animation loop that draws the models
And how do you think this drawing is going to happen? There's no magic "here you have a model, display it" function, because on their own the numbers making up a model don't have any kind of meaning. Some program must give meaning to those numbers, and that is a renderer.
and updates repeatedly, rather than actually program the code to draw every little thing.
Again, there is no magic "draw it" function. Drawing a model involves going through each of the numbers it consists of and turning those into drawing commands to the GPU.
That would be both extremely time consuming and would make the models useless.
How are the models useless, when they are what controls the issuing of commands to OpenGL? Or do you think OpenGL is used to actually "create" models?
So where does OpenGL or Direct3D come in, in relation to video games and 3D art?
It is used to turn the numbers a 3D model consists of, as saved out of a modeller, into something pleasant to look at.
What is so crucial about them when all the graphics are already created
The graphics are not yet created when the model is done. What has been created is a model, plus some auxiliary data in the form of textures and shaders, which are then turned into graphics in real time, at the execution time of the program.
and just need to be loaded and drawn?
Again, after being loaded, a model is just a bunch of numbers. And drawing means turning those numbers into something to look at, which requires sending drawing commands to the graphics processor (GPU), which happens using an API like OpenGL or Direct3D.
Are they used mainly for shaders and effects?
They are used to turn the numbers generated by a 3D modelling program (Blender, Maya, ZBrush) into an actual picture.
You have data. Like a model, with vertices, normals, and textures. As #datenwolf stated above, those are all just numbers sitting on the hard drive or in RAM, not colors on the screen.
Your CPU (which is where the program you write runs) can't talk to the screen directly. Instead, you send the data you want to draw to the GPU. The GPU then draws the data. Graphics APIs like OpenGL and Direct3D allow programs running on the CPU to send data to the GPU and customize how the GPU draws it. This is a gross simplification, but it sounds like you just need an overview.
Ultimately, every graphics program must go through a graphics API. When you draw an image, for example, you send the GPU the image, and the GPU draws it on the screen. Draw some text? Send the data to the GPU. The GPU draws it. Remember, your code can't talk to the screen. It CAN talk to the GPU through OpenGL or Direct3D, and the GPU then draws the data.
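"Send the data to the GPU" looks roughly like this with PyOpenGL (a sketch; vertices is assumed to be a numpy array of vertex data):

    from OpenGL.GL import *

    vbo = glGenBuffers(1)              # ask the GPU driver for a buffer object
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    # Copy the vertex numbers from CPU RAM into GPU-managed memory.
    glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_STATIC_DRAW)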
Before OpenGL and DirectX, games had to use special instructions depending on which graphics card you had. When you bought a new game, you had to check carefully whether your card was supported; otherwise you couldn't use the game.
OpenGL and DirectX are standardized APIs to the graphics card. The library implementing the API is delivered by the manufacturer of the card. If it follows the specification, you are guaranteed that games will work (provided they also follow the same specification).
Open Graphics Library (OpenGL) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering.
I understand that you usually create complex 3D models in Blender or some other 3D modelling software and export them afterwards, e.g. as .obj. This .obj file gets parsed by your program and OpenGL renders it. This, as far as I understand, is real-time rendering.
Now I was wondering if there is something like pre-rendered objects. I'm a little bit confused because there are so many articles/videos about real-time rendering, but I haven't found any information about non-real-time rendering. Does something like this exist or not? The only thing that would come to my mind as non-real-time rendering is a video.
I guess this is pretty much a yes-or-no question :) but if it exists, maybe someone could point me to some websites with explanations.
"Real-time rendering" means that the frames are being generated as fast as they can be displayed. "Non-real-time rendering", or "offline rendering" means generating frames one at a time, taking as much time as necessary to achieve the desired image quality, and then later assembling them into a movie. Video at the quality of video games can be rendered in real time; something as elaborate as a Pixar movie, though, has to be done in offline mode. Individual frames can still take hours of rendering time!
It's not entirely clear what you mean by "prerendered objects"; however, there are things called VBOs and vertex arrays that store an object's geometry in VRAM, so that it doesn't have to be fed into the rendering pipeline with glVertex3f() or similar every frame (that per-vertex approach is called immediate mode).
VBOs and vertex arrays are used instead of immediate mode because they are far faster: rather than calling into the graphics driver to load data for every single vertex, the data is kept in VRAM, which is faster than normal RAM, ready to be fed into the render pipeline.
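A rough contrast of the two approaches, in legacy-GL-style PyOpenGL (a sketch, not a complete program):

    import numpy as np
    from OpenGL.GL import *

    # Immediate mode: one driver call per vertex, repeated every frame.
    glBegin(GL_TRIANGLES)
    glVertex3f(-0.5, -0.5, 0.0)
    glVertex3f( 0.5, -0.5, 0.0)
    glVertex3f( 0.0,  0.5, 0.0)
    glEnd()

    # VBO: upload the vertices to VRAM once, at load time...
    data = np.array([-0.5, -0.5, 0.0,  0.5, -0.5, 0.0,  0.0, 0.5, 0.0],
                    dtype=np.float32)
    vbo = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, data.nbytes, data, GL_STATIC_DRAW)

    # ...then each frame, just point at the buffer and draw.
    glEnableClientState(GL_VERTEX_ARRAY)
    glVertexPointer(3, GL_FLOAT, 0, None)  # None = offset 0 into the bound VBO
    glDrawArrays(GL_TRIANGLES, 0, 3)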
The page here may help, too.
There's nothing stopping you from rendering to an off-screen frame-buffer (i.e., an FBO) and then saving that to disk rather than displaying it to the screen. For instance, that's how GPGPU techniques used to work before the advent of CUDA, OpenCL, etc. ... You would load your data as an unfiltered floating point texture, perform your calculation using pixel-shaders on the FBO, and then save the results back to disk.
In the link I posted above, it states in the overview:
This extension defines a simple interface for drawing to rendering destinations other than the buffers provided to the GL by the window-system.
It then goes on to state,
By allowing the use of a framebuffer-attachable image as a rendering destination, this extension enables a form of "offscreen" rendering.
So, you would get your "non-real-time" rendering by rendering off-screen some scene that renders slower than 30fps, and then saving those results to some movie file or file-sequence format that can be played back at a later date.
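A hedged sketch of that workflow with PyOpenGL (WIDTH, HEIGHT, and draw_scene() are placeholders):

    from OpenGL.GL import *

    # Create a texture to serve as the off-screen color buffer.
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, None)

    # Attach it to an FBO and render there instead of to the window.
    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0)
    assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

    draw_scene()  # take as long as the scene needs; nothing appears on screen
    pixels = glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE)
    # pixels can now be written out as one frame of a movie or image sequence.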
Here is a possibly unanswerable question...
How do I create a website capable of displaying 3D images on a 3D capable display/monitor without using plugins?
Ignore bandwidth issues; bandwidth is not a problem here. I also wish to avoid the red/green (anaglyph) effect, as it has many problems. I figure I could simply display a 120 Hz video, but then how do I sync the left and right images up with the screen's timing?
Any help would be appreciated; however, 'impossible' is never an answer.
Thanks
One solution could be to mimic the red/green 3D effect. You'd pass the left- and right-eye images through a filter and then display them on top of each other, though off the top of my head I'm not sure how. If you could make the views transparent, that might work.
You wouldn't need to display anything at 120 Hz or have any synchronisation or plugins.
Google Street View uses this 3D mode.
There is no way a browser has access to the graphics adapter's driver libraries, hence it is not possible to make such a website, even with a plugin. Not to mention that most graphics adapters can't handle windowed 3D frames; except for professional Quadro cards, every other 3D-capable card has to run at certain resolutions and in full screen.
First of all, I'm really not sure that there's a good way to sync output with the screen refresh. This is because anything running in a browser is subject to the browser's compositing and rendering.
You may want to look into WebGL: it's essentially a subset of OpenGL intended to provide hardware-accelerated graphics in the browser, and it's supported by the beta or upcoming versions of all the major browsers. Unfortunately, without any syncing mechanism, I don't know of any way to support the polarization method of 3D.