I have written some OpenGL and OpenGL ES code before, and I know that OpenGL ES is a subset of OpenGL designed for embedded systems.
Now, the RK3399 supports OpenGL ES but does not support OpenGL.
I wrote two separate versions of code that grab an image from a camera and display it: one using OpenGL and one using OpenGL ES.
However, the OpenGL ES version runs at about 20 fps, while the OpenGL version only reaches about 5 fps.
My assumption is that the OpenGL ES implementation is completely different from the OpenGL one, so the RK3399 has an OpenGL ES driver but no native OpenGL driver.
Is my assumption right? If not, how can I explain the performance gap between the two versions?
Thanks in advance.
Related
I read here and there that OpenGL ES does not support geometry shaders. I just wanted to know if it will be added in the future and if not I'd be interested in why.
Is it a technical limitation of some sort?
Just found out that geometry shaders are apparently available in OpenGL ES 3.1 via the EXT_geometry_shader extension (and in core OpenGL ES 3.2)
My setup includes an on-board Intel integrated GPU for everyday tasks and a high-performance Nvidia GPU for graphics-intensive applications. I'm developing an OpenGL 3.3 (core profile) application (using shaders, not the fixed-function pipeline). By default, my app runs on the Intel GPU and works just fine. But when I try to run it on the Nvidia GPU, it only shows a black screen.
Now here's the interesting part. The OpenGL context gets created correctly, and the world coordinate axes I draw for debugging actually get drawn (GL_LINES). For some reason, Nvidia doesn't draw any GL_POLYGONs or GL_QUADs.
Has anyone experienced something similar, and what do you think is the culprit here?
It appears GL_POLYGON, GL_QUADS and GL_QUAD_STRIP were removed from the OpenGL 3.3 core profile. For some reason the Intel driver draws them regardless, but Nvidia started drawing as well as soon as I substituted them with GL_TRIANGLES etc. Always check for removed features when problems like this arise.
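For example, a quad that used to be submitted as GL_QUADS can be drawn as two triangles instead. This is a minimal sketch, assuming a VAO, a VBO/EBO and a shader program are already set up:

    /* Sketch: drawing one former GL_QUADS quad as two GL_TRIANGLES.
       Assumes the vertex data below is uploaded to a bound VBO, the indices
       to a bound GL_ELEMENT_ARRAY_BUFFER, and a shader program is in use. */
    static const GLfloat quad_verts[] = {
        -0.5f, -0.5f,   /* 0: bottom-left  */
         0.5f, -0.5f,   /* 1: bottom-right */
         0.5f,  0.5f,   /* 2: top-right    */
        -0.5f,  0.5f    /* 3: top-left     */
    };
    static const GLuint quad_indices[] = {
        0, 1, 2,   /* first triangle  */
        0, 2, 3    /* second triangle */
    };

    /* ...after glBufferData for both buffers and glVertexAttribPointer setup: */
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);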
When I look at the 4th edition of the book "OpenGL SuperBible", it starts with drawing points, lines and polygons, and shaders are only discussed later on. In the 6th edition, the very first example starts directly with shaders. I haven't used OpenGL for a long time, but is starting with shaders now the way to go?
Why the shift? Is it because of the move from the fixed pipeline to shaders?
To a limited extent it depends on exactly which branch of OpenGL you're talking about. OpenGL ES 2.0 has no path to the screen other than shaders: there's no matrix stack, no option to draw without shaders, and none of the built-in variables bound to the fixed pipeline. WebGL is based on OpenGL ES 2.0, so it inherits all of that behaviour.
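To make that concrete, here is a minimal GLES 2.0 / WebGL-style shader pair written as C string literals. The names u_mvp, a_position, a_color and v_color are only illustrative; the point is that the application now supplies its own transform as a uniform instead of relying on a built-in matrix stack:

    /* Minimal OpenGL ES 2.0-style shaders as C strings (illustrative names).
       Nothing is built in any more, so the application supplies the
       model-view-projection matrix itself via the u_mvp uniform. */
    static const char *vertex_src =
        "uniform mat4 u_mvp;                   \n"  /* replaces the old matrix stack */
        "attribute vec4 a_position;            \n"
        "attribute vec4 a_color;               \n"
        "varying vec4 v_color;                 \n"
        "void main() {                         \n"
        "    v_color = a_color;                \n"
        "    gl_Position = u_mvp * a_position; \n"
        "}                                     \n";

    static const char *fragment_src =
        "precision mediump float;              \n"
        "varying vec4 v_color;                 \n"
        "void main() {                         \n"
        "    gl_FragColor = v_color;           \n"
        "}                                     \n";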
As per derhass' comment, all of the fixed stuff is deprecated in modern desktop GL and you can expect it to vanish over time. The quickest thing to check is probably the OpenGL 4.4 quick reference card. If the functionality you want isn't on there, it's not in the latest OpenGL.
As per your comment, Khronos defines OpenGL to be:
the only cross-platform graphics API that enables developers of software for PC, workstation, and supercomputing hardware to create high-performance, visually-compelling graphics software applications, in markets such as CAD, content creation, energy, entertainment, game development, manufacturing, medical, and virtual reality.
It more or less just exposes the hardware. The hardware can't do anything without shaders. Nobody in the industry wants to be maintaining shaders that emulate the old fixed functionality forever.
"About the only rendering you can do with OpenGL without shaders is clearing a window, which should give you a feel for how important they are when using OpenGL." - From OpenGL official guide
As of OpenGL 3.1, the fixed-function pipeline was removed from the core specification and shaders became mandatory.
That is why newer editions of the SuperBible and the OpenGL Red Book describe the programmable pipeline early on, and then explain how to write and use vertex and fragment shader programs.
For your shader objects you now have to:
Create the shader (glCreateShader, glShaderSource)
Compile the shader source into an object (glCompileShader)
Verify the shader (glGetShaderInfoLog)
Then you link the shader objects into your shader program:
Create shader program (glCreateProgram)
Attach the shader objects (glAttachShader)
Link the shader program (glLinkProgram)
Verify (glGetProgramInfoLog)
Use the shader (glUseProgram)
There is more to do before you can render anything than there was with the old fixed-function pipeline. The programmable pipeline is no doubt more powerful, but it does make it harder just to get started, and shaders are now a core concept to learn.
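A rough sketch of those steps in C, assuming a current GL context with loaded function pointers (glad, GLEW or similar) and that vertex_src and fragment_src hold valid GLSL source; error handling is kept minimal:

    #include <stdio.h>
    /* include your GL loader header (e.g. glad or GLEW) before this */

    /* Compile one shader stage and print its info log on failure. */
    static GLuint compile_shader(GLenum type, const char *src)
    {
        GLuint shader = glCreateShader(type);        /* glCreateShader      */
        glShaderSource(shader, 1, &src, NULL);       /* glShaderSource      */
        glCompileShader(shader);                     /* glCompileShader     */

        GLint ok = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {                                   /* glGetShaderInfoLog  */
            char log[1024];
            glGetShaderInfoLog(shader, sizeof log, NULL, log);
            fprintf(stderr, "compile error: %s\n", log);
        }
        return shader;
    }

    /* Link the two shader objects into a program. */
    static GLuint build_program(const char *vertex_src, const char *fragment_src)
    {
        GLuint vs = compile_shader(GL_VERTEX_SHADER, vertex_src);
        GLuint fs = compile_shader(GL_FRAGMENT_SHADER, fragment_src);

        GLuint prog = glCreateProgram();             /* glCreateProgram     */
        glAttachShader(prog, vs);                    /* glAttachShader      */
        glAttachShader(prog, fs);
        glLinkProgram(prog);                         /* glLinkProgram       */

        GLint linked = 0;
        glGetProgramiv(prog, GL_LINK_STATUS, &linked);
        if (!linked) {                               /* glGetProgramInfoLog */
            char log[1024];
            glGetProgramInfoLog(prog, sizeof log, NULL, log);
            fprintf(stderr, "link error: %s\n", log);
        }
        return prog;
    }

    /* later: glUseProgram(build_program(vertex_src, fragment_src)); */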
Using OpenGL 4.4 and OpenCL 2.0, let's say I just want to modify specific pixels of a texture every frame.
What is the optimal way to achieve this?
Which object should I share?
Will I only be able to modify a limited number of pixels?
I want GPU-only operations.
First off, there are no OpenCL 2.0 drivers yet; the specification only recently got finalized and implementations probably won't happen until 2014.
Likewise, many OpenGL implementations aren't at 4.4 yet.
However, you can still do what you want with OpenCL 1.2 (or 1.1 since NVIDIA is behind the industry in OpenCL support) and current OpenGL implementations.
Look for OpenCL / OpenGL interop examples, but basically:
Create OpenCL context from OpenGL context
Create OpenCL image from OpenGL texture
After rendering your OpenGL into the texture, acquire the image for OpenCL, run an OpenCL kernel that only updates the specific pixels you want to change, and release it back to OpenGL
Draw the texture to the screen
Often OpenCL kernels are 2D and address each pixel, but you can run a 1D kernel where each work item updates a single pixel based on some algorithm. Just make sure not to write the same pixel from more than one work item or you'll have a race condition.
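A hedged sketch of that flow in C (OpenCL 1.2, with Linux/GLX context properties shown; other platforms use different properties, and in OpenCL 1.1 the image is created with clCreateFromGLTexture2D instead). The platform, device and kernel handles are assumed to exist, and error checks are omitted:

    #include <CL/cl.h>
    #include <CL/cl_gl.h>
    #include <GL/glx.h>

    /* One-time setup: create a CL context that shares the current GL context,
       a command queue, and a CL image that wraps the existing GL texture. */
    static cl_mem setup_shared_texture(cl_platform_id platform, cl_device_id device,
                                       GLuint gl_tex, cl_context *out_ctx,
                                       cl_command_queue *out_queue)
    {
        cl_int err;
        cl_context_properties props[] = {
            CL_GL_CONTEXT_KHR,   (cl_context_properties)glXGetCurrentContext(),
            CL_GLX_DISPLAY_KHR,  (cl_context_properties)glXGetCurrentDisplay(),
            CL_CONTEXT_PLATFORM, (cl_context_properties)platform,
            0
        };
        *out_ctx   = clCreateContext(props, 1, &device, NULL, NULL, &err);
        *out_queue = clCreateCommandQueue(*out_ctx, device, 0, &err);
        /* build the kernel against *out_ctx with clCreateProgramWithSource,
           clBuildProgram and clCreateKernel as usual */
        return clCreateFromGLTexture(*out_ctx, CL_MEM_READ_WRITE,
                                     GL_TEXTURE_2D, 0, gl_tex, &err);
    }

    /* Per frame: hand the texture to CL, run the kernel, give it back to GL. */
    static void update_pixels(cl_command_queue queue, cl_kernel kernel,
                              cl_mem img, size_t num_pixels)
    {
        glFinish();                                 /* make sure GL rendering is done */
        clEnqueueAcquireGLObjects(queue, 1, &img, 0, NULL, NULL);
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &img);
        size_t global = num_pixels;                 /* 1D: one work item per pixel */
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReleaseGLObjects(queue, 1, &img, 0, NULL, NULL);
        clFinish(queue);                            /* then draw gl_tex with GL as usual */
    }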
I've been trying to work with OpenGL ES 2 on Android for some time now, but I'm finding my lack of experience with OpenGL itself to be an issue, since I barely understand what all the GLES20 methods actually do. I've decided to try to learn actual OpenGL, but a little bit of reading has informed me that each version of OpenGL is drastically different from its predecessor. Wikipedia isn't very clear on which version OpenGL ES 2 most closely resembles.
So, my question is, which version of OpenGL should I learn for the purpose of better understanding OpenGL ES2?
According to the book OpenGL ES 2.0 Programming Guide:
The OpenGL ES 1.0 and 1.1 specifications implement a fixed function
pipeline and are derived from the OpenGL 1.3 and 1.5 specifications,
respectively. The OpenGL ES 2.0 specification implements a
programmable graphics pipeline and is derived from the OpenGL 2.0
specification.
OpenGL ES 2.0's closest relative is OpenGL 2.0. Khronos provides a difference specification, which enumerates what desktop OpenGL 2.0 functionality was removed to create OpenGL ES 2.0. The shading language for OpenGL ES 2.0 (GLSL ES 1.0) is derived from GLSL 1.20.
OpenGL ES 2.0 is almost a one-to-one match for WebGL.
The differences are practically only in the setup of the environment, which on Android happens through EGL and in WebGL through calls to DOM methods (setting up a canvas).
A comparison to classic OpenGL is next to impossible, because classic OpenGL means an almost fixed and hidden rendering pipeline, controlled by a stack of matrices and attribute state. That model is now obsolete in ES; instead you have the "opportunity" to control almost every aspect of the rendering pipeline yourself.
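For reference, the EGL side of the environment setup mentioned above might look roughly like this on Android (native code) or any other EGL platform; the config attributes and the native_window handle are assumptions, and error checking is omitted. In WebGL the equivalent is essentially a single canvas.getContext("webgl") call:

    #include <EGL/egl.h>

    /* Minimal EGL setup for an OpenGL ES 2.0 context (sketch). */
    static void init_egl(EGLNativeWindowType native_window)
    {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, NULL, NULL);

        const EGLint cfg_attrs[] = {
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,   /* request an ES 2.0-capable config */
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint num_cfg;
        eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &num_cfg);

        const EGLint ctx_attrs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        EGLContext ctx  = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attrs);
        EGLSurface surf = eglCreateWindowSurface(dpy, cfg, native_window, NULL);

        eglMakeCurrent(dpy, surf, surf, ctx);          /* GLES 2.0 calls are valid from here */
    }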