Is a material just the images applied to a geometric object? - OpenGL

From a graphics point of view, is a material just the images applied to a geometric object?

We can define a material as a set of data that describes how a surface reacts to light.
In the Blinn-Phong shading model, the material is defined by several sets of data:
(rgb) ambient - (see below)
(rgb) diffuse - how strongly it diffuses the incoming light of a given color
(rgb) specular - how well it reflects the incoming light of a given color
(number) shininess - how perfect (how small and focused) is the reflection. Bigger value = smaller "shining spot".
The ambient value is just added to the final color - it is there to emulate "secondary light reflections". It's usually set to the same hue as the diffuse color, but at a lower intensity.
By balancing ambient/diffuse/specular/shininess parameters, you may make the surface resemble different real-world materials.
Also, those parameters may be defined either per-vertex or per-pixel (as a texture). It is common to take the values of ambient and diffuse from a color texture and specify constant specular and shininess, but you could also have three different textures for the ambient, diffuse and specular colors, in order to simulate a more sophisticated material which reflects light differently depending on position.
There might be more parameters involved depending on what effects you want to use, for example an additional value for glowing surfaces, etc.
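As a minimal sketch of how those four parameter sets enter the Blinn-Phong equation (GLSL; every uniform and variable name here is illustrative, not a fixed API):

#version 330 core
// Hedged Blinn-Phong sketch - names are illustrative.
in vec3 vNormal;    // interpolated surface normal
in vec3 vLightDir;  // direction from the fragment towards the light
in vec3 vViewDir;   // direction from the fragment towards the camera
uniform vec3 lightColor;
// The four material parameter sets described above:
uniform vec3 matAmbient;
uniform vec3 matDiffuse;
uniform vec3 matSpecular;
uniform float matShininess;
out vec4 fragColor;

void main() {
    vec3 N = normalize(vNormal);
    vec3 L = normalize(vLightDir);
    vec3 H = normalize(L + normalize(vViewDir));  // Blinn-Phong half vector
    float diff = max(dot(N, L), 0.0);             // how much light reaches the surface
    float spec = pow(max(dot(N, H), 0.0), matShininess); // bigger shininess = tighter spot
    vec3 color = matAmbient                       // constant "secondary reflections" term
               + matDiffuse * diff * lightColor
               + matSpecular * spec * lightColor;
    fragColor = vec4(color, 1.0);
}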

Material usually refers to the colour of the geometric object, while the image is the texture. The material specifies how the object reacts to ambient and direct lighting, its reflectance, transparency etc.
They can be combined in various ways to produce different effects.
For example the texture might completely override the material so that the underlying colour has no effect on the final scene.
In other cases the texture might be blended with the material so that the same texture effect can be applied to different objects (red car, blue car etc.).
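A hedged GLSL sketch of those two combination modes (the sampler and uniform names are made up for illustration):

#version 330 core
in vec2 vUV;
uniform sampler2D carTexture;  // illustrative name
uniform vec4 materialDiffuse;  // e.g. red for one car, blue for another
out vec4 fragColor;

void main() {
    vec4 tex = texture(carTexture, vUV);
    // Override (decal-like): the texture completely replaces the material colour:
    // fragColor = tex;
    // Blend (modulate): the texture tints the material colour, so the same
    // texture produces a red car, a blue car, etc.
    fragColor = tex * materialDiffuse;
}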

Related

Generating a proper alpha (transparency) value out of a PBR material

I'm writing a PBR shader and I'm trying to determine the correct way to generate the alpha value that is output for the image. The goal is to have a final image that is 'premultiplied alpha', so it can be used with another Over composite operation later on (say, compositing it over another background image). This all works great, except for the case of specular highlights on transparent surfaces.
Reading this article:
https://google.github.io/filament/Filament.md.html#lighting/transparencyandtranslucencylighting
In particular:
Observe a window and you will see that the diffuse reflectance is transparent. On the other hand, the brighter the specular reflectance, the less opaque the window appears. This effect can be seen in figure 63: the scene is properly reflected onto the glass surfaces but the specular highlight of the sun is bright enough to appear opaque.
They also include the code snippet:
// baseColor has already been premultiplied
vec4 shadeSurface(vec4 baseColor) {
    float alpha = baseColor.a;
    vec3 diffuseColor = evaluateDiffuseLighting();
    vec3 specularColor = evaluateSpecularLighting();
    return vec4(diffuseColor + specularColor, alpha);
}
When rendering glass the metallic level would be 0, so it's not pulling from the baseColor at all, and just using the specular level. This all makes sense and means specular highlights can still occur even on the transparent glass, as per the above quote. I don't want to just multiply by alpha at the end, since my baseColor texture would already be a premultiplied alpha image, such as something containing a decal. For example let's say I'm drawing a pane of glass with a sticker on it that's the letter 'A'.
My question is: for a pixel that has a specular highlight on a transparent portion of the glass, what should the alpha value be for compositing to work downstream? The alpha would be 0 according to the above code, which would make an Over operation later on behave poorly. I say this since a property of pre-multiplied alpha is that max(R,G,B) <= A, but this specular highlight would have max(R,G,B) > A. This would result in the over operation being additive between the specular highlight and the background it's being composited over, blowing out the final image.
Should I set the alpha to max(specular, alpha), to give a more useful alpha value for these pixels? Is there a more 'correct' way to handle this?
Thanks
The idea behind the pre-multiplied approach is that we assume the alpha/opacity only affects the diffuse term (i.e. the light that goes inside the material/surface), whereas the specular is assumed to be unaffected (at least for a dielectric, the light is simply reflected off the surface un-tinted).
In essence, you should have:
return vec4(diffuseColor * alpha + specularColor, alpha);
Where the alpha is 'pre-multiplied' into the diffuse colour, while the specular colour is left at full intensity (as it should be).
Your blend mode will also need to follow suit: with premultiplied output the usual choice is glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), so the source colour is added at full strength while the destination is attenuated by the source alpha.

Can you mix omnidirectional lights with a shadowmap?

I have implemented PSSM (Parallel-Split Shadow Maps) for my RPG. It uses only the "sun" (one directional light high above).
So my question is: is there a special technique to add, say, a maximum of four omnidirectional lights in the pixel shader?
It would work somewhat along these lines:
At shadowmap application time (or maybe at its creation):
if in light: do as usual
else: check if any omni light is close enough to light this pixel (and if so, don't shadow it).
Maybe this could even be done during shadowmap generation (so filtering would be applied to those omni lights too).
Any hints or tips warmly welcomed!
The answer to this question is so obvious that the question itself suggests that you've gone too far into the hacks of 3D graphics and need to remember what all of this is actually supposed to be doing.
A shadowmap is a way to tell whether a particular location on a surface is in shadow relative to a particular light in the scene. The shadowmap answers the question, "Is there something solid between the point on the surface and the light source?"
If the answer to this question is "yes", then that light source does not contribute to the lighting computations for that point. If the answer is "no", then it does.
The color of a point on a surface is based on the incoming light from all light sources and the surface characteristics of that point on the surface (diffuse color, specular shininess, normal, etc). All of the computations from each light add into one another to produce the final color value that the point represents.
You generally also have various hacks. The ambient term is often used to represent lots of indirect illumination. Light maps can take the place of light from other sources that you're not computing dynamically in the shader. And so on. But in the end, they are all just lights.
Your lighting equation takes the light parameters (color, direction/position, or just the ambient intensity for the ambient light) and the surface characteristics (as stated above), and produces the quantity of light reflected from the surface. Since the reflectance from one light is not affected by the reflectance from other lights, you can compute this independently for each light.
All of these values are added together to produce the final value.
The fact that you can short-circuit the computation of one of these via a shadowmap test is irrelevant to the overall scheme. Just add the various lighting terms to one another, and that's your answer. There is no "special technique" to doing this; you just do it.
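As a hedged GLSL sketch of that "just add the terms" scheme - the shadow map gates only the sun's contribution, each omni light simply adds its own, and sampleShadow is a placeholder for whatever your PSSM lookup is (all names illustrative):

#version 330 core
in vec3 vWorldPos;
in vec3 vNormal;
uniform vec3 sunDir;        // direction the directional "sun" light travels
uniform vec3 sunColor;
uniform vec3 omniPos[4];    // up to four omni lights
uniform vec3 omniColor[4];
uniform int omniCount;
uniform vec3 matDiffuse;
out vec4 fragColor;

float sampleShadow(vec3 worldPos) {
    // Placeholder: replace with your PSSM lookup (1.0 = lit, 0.0 = shadowed).
    return 1.0;
}

void main() {
    vec3 N = normalize(vNormal);
    // Sun contribution, short-circuited by the shadow-map test.
    vec3 color = matDiffuse * sunColor
               * max(dot(N, -sunDir), 0.0)
               * sampleShadow(vWorldPos);
    // Each omni light adds its own, independent contribution.
    for (int i = 0; i < omniCount; ++i) {
        vec3 toLight = omniPos[i] - vWorldPos;
        float atten = 1.0 / (1.0 + dot(toLight, toLight)); // simple distance falloff
        color += matDiffuse * omniColor[i]
               * max(dot(N, normalize(toLight)), 0.0) * atten;
    }
    fragColor = vec4(color, 1.0);
}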

What's the difference between material and texture?

In computer graphics, what's the difference between a material and a texture?
In OpenGL, a material is a set of coefficients that define how the lighting model interacts with the surface. In particular, ambient, diffuse, and specular coefficients for each color component (R, G, B) are defined and applied to a surface, effectively multiplied by the amount of light of each kind/color that strikes the surface. A final emissive coefficient is then added to each color component, which allows objects to appear luminous without actually interacting with other objects.
A texture, on the other hand, is a set of 1-, 2-, 3-, or 4-dimensional bitmap (image) data that is applied and interpolated onto a surface according to texture coordinates at the vertices. Texture data alters the color of the surface whether or not lighting is enabled (and depending on the texture mode, e.g. decal, modulate, etc.). Textures are frequently used to provide sub-polygon-level detail on a surface, e.g. applying a repeating brick-and-mortar texture to a quad to simulate a brick wall, rather than modeling the geometry of each individual brick.
In the classical (fixed-pipeline) OpenGL model, textures and materials are somewhat orthogonal. In the new programmable-shader world, the line has blurred quite a bit. Frequently, textures are used to influence lighting in other ways. For example, bump maps are textures used to perturb surface normals to affect lighting, rather than modifying pixel color directly as a regular "image" texture would.
The question suggests a common misunderstanding of various computer graphics concepts. It is one born of pre-shader thinking and coding.
A texture is nothing more than a container for a series of one or more images, where an image is an array of some dimensionality (1D, 2D, etc) of values, where each value can be a vector of between 1 and 4 numbers. Textures also have some special techniques for accessing values from them that allow for interpolation and the minimizing of aliasing artifacts from sampling.
A texture can contain colors, but textures do not have to contain colors. Textures can be used to vary parameters across an object's surface, but that is not all textures can be used for.
Textures have no direct association with "materials"; you can use them for a wide variety of things (or nothing at all).
A material is a concept in lighting. Given a particular light and a point on the surface, the intensity (ie: color) of light reflected from that surface at that point is governed by a lighting equation. This equation is a function of many parameters. But those parameters are grouped into two categories.
One category of light equation parameters are the light parameters. These describe the characteristics of the light source. Point lights vs. directional lights vs. spot lights. The light intensity (again: color) is another parameter. Since the intensity itself may vary depending on the direction of the surface point relative to the light (think flashlights or spotlights), the intensity may be accessed from a texture. That's how many games project flashlights over a dark room.
The other category of light equation parameters describes the characteristics of the surface at that point. These are called material parameters. The material parameters, or material for short, describe important properties of the surface at the point in question. The normal at that point is an important one. There is also the diffuse reflectance (color), specular reflectance, specular shininess (exponent for Phong/Blinn-Phong) and various other parameters, depending on how comprehensive your lighting equation is.
Where do these values come from? Light parameters tend to be fixed in the world. Lights don't move per-object (though if you're doing lighting in object space, then each object would have its own light position). The light intensity may vary. But that's mostly it; anything else happens between frames, not within a single frame's rendering. So most light parameters are shader uniforms.
Material parameters can come from a variety of sources. Using a shader uniform effectively means that all points on the surface have that same value. So you could have the diffuse color come from a uniform, which would give the surface a uniform color (modified by lighting, of course). You can vary material parameters per-vertex, by passing them as vertex attributes or computing them from other attributes.
Or you can provide a parameter by mapping a texture to a surface. This mapping involves associating texture coordinates with vertex positions, so that the texture is directly attached to the surface. The texture is sampled at that location during rendering, and that value is used to perform the lighting.
The most common textures you're familiar with, "color textures", are simply varying the diffuse reflectance of the surface. The texture provides the diffuse color at each point along the surface.
This is not the only function of textures. You could just as easily vary the specular shininess over the surface. Or the light intensity. Or anything else.
Textures are tools. Materials are just a group of light equation parameters.
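To make that concrete, here is a hedged GLSL sketch in which the diffuse reflectance comes from a texture, the shininess from a single-channel texture, and the specular colour from a uniform - every name here is illustrative:

#version 330 core
in vec2 vUV;
in vec3 vNormal;
in vec3 vLightDir;
in vec3 vViewDir;
uniform sampler2D diffuseMap;    // material parameter varied per-pixel
uniform sampler2D shininessMap;  // another parameter varied per-pixel (one channel)
uniform vec3 specularColor;      // uniform: same value over the whole surface
uniform vec3 lightColor;
out vec4 fragColor;

void main() {
    vec3 N = normalize(vNormal);
    vec3 L = normalize(vLightDir);
    vec3 H = normalize(L + normalize(vViewDir));
    vec3 diffuse = texture(diffuseMap, vUV).rgb;
    // Remap the stored [0,1] value to a usable specular exponent.
    float shininess = 1.0 + texture(shininessMap, vUV).r * 127.0;
    vec3 color = diffuse * lightColor * max(dot(N, L), 0.0)
               + specularColor * lightColor * pow(max(dot(N, H), 0.0), shininess);
    fragColor = vec4(color, 1.0);
}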
What I think of with those terms:
A texture is an image that is mapped onto a 3D object.
A material simulates a physical material. Take "glass" for example. You couldn't produce a glass effect with a plain texture map; a material has parameters like how it reflects and refracts light at different angles. A material could also be just a simple texture map, so sometimes the terms mean the same thing.
Although the terms can be used interchangeably, it's common to refer to a plain bitmap as a texture, while a fully defined surface description, with lighting properties, bump mapping, etc., would more usually be referred to as a material.
But I should stress that the exact terminology depends on the tools being used and the community around them.

Basic OpenGL lighting question

I think this is an extremely stupid newbie question, but then I am a newbie in graphics and OpenGL. Having drawn a sphere and put a light source nearby, and having also specified ambient light, I started experimenting with light and material values and came to a surprising conclusion: the colors we specify with glColor* do not matter at all when lighting is enabled. Instead, the equivalent is the material's ambient component. Is this conclusion correct? Thanks
If lighting is enabled, then instead of the vertex color, the material color is used (well, colors - there are several of them for the different types of response to light). Material colors are specified with the glMaterial* functions.
If you want to reuse your code, you can call glEnable(GL_COLOR_MATERIAL) and glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE) to have your old glColor* calls mapped to the material color automatically.
(And please switch to shaders as fast as possible - the shader approach is both easier and more powerful)
I suppose you don't use a fragment shader yet. From glprogramming.com:
vertex color = the material emission at that vertex
             + the global ambient light scaled by the material's ambient property at that vertex
             + the ambient, diffuse, and specular contributions from all the light sources, properly attenuated
So yes, vertex color is not used.
Edit: You can also look up the GL lighting equation in the GL specification (you have one nearby, don't you? ^^)

DirectX 9 Specular Mapping

How would I implement loading a texture to be used as a specular map for a piece of geometry, and rendering it in DirectX 9 using C++?
Are there any tutorials or basic examples I can refer to?
Use D3DXCreateTextureFromFile to load the file from disk. You then need to set up a shader that multiplies the specular value by the value stored in the texture. This gives you the specular colour.
So your final pixel colour comes from:
Final = ambient + (N.L * texture colour) + ((N.H)^shininess * specular map value)
You can do this easily in a shader.
It's also worth noting that it can be very useful to store the per-texel specular value in the alpha channel of the texture. This way you only need one texture around, though it does break per-pixel transparency.
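The question is about Direct3D 9, but as a hedged sketch of the "specular mask in the alpha channel" idea (written in GLSL to match the other snippets here; the HLSL version is a direct transliteration, and every name is illustrative):

#version 330 core
in vec2 vUV;
in vec3 vNormal;
in vec3 vLightDir;
in vec3 vViewDir;
uniform sampler2D diffuseMap;  // RGB = diffuse colour, A = per-texel specular mask
uniform vec3 lightColor;
uniform float shininess;
out vec4 fragColor;

void main() {
    vec4 tex = texture(diffuseMap, vUV);
    vec3 N = normalize(vNormal);
    vec3 L = normalize(vLightDir);
    vec3 H = normalize(L + normalize(vViewDir));
    vec3 ambient = 0.1 * tex.rgb;  // assumed constant ambient factor
    vec3 diffuse = tex.rgb * lightColor * max(dot(N, L), 0.0);
    // Specular strength read from the alpha channel - the reason this trick
    // costs you per-pixel transparency.
    vec3 specular = tex.a * lightColor * pow(max(dot(N, H), 0.0), shininess);
    fragColor = vec4(ambient + diffuse + specular, 1.0);
}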