Procedural / Dynamic colouring of a skybox in OpenGL - C++

I have been developing an application that needs to render a procedural sky; by this I mean that the sky has a day/night cycle which changes depending on the time within the simulation.
I have seen a method in the past that uses a colormap such as this (a gradient running from daytime blues into sunset yellows, oranges and reds):
Depending on some variable, such as time, the code scans across the image and uses a range of colors as the texture of the sky. During sunrise/sunset the scan moves into the yellow/orange/red colors on the right.
I'm not sure what this technique is called, but I think it is what I'm after. I would love it if anyone could show me, or point me to, an example of this technique with OpenGL and C++.
On a side note, my skybox is not the usual shape; it's more of a "sky right angle", as below.
As you can see, there is no top to the sky right angle; only the two blue sides you can see will have the sky rendered on them (black is the background). I was wondering whether there is any way to render a procedural/dynamic day/night sky on these two planes (without a noticeable seam between them), and, as a side question, to have the tops of the planes fade out to alpha whether it's night or day.
Any explanation/example of how to scan a colormap and then set it as a texture in OpenGL/C++ is greatly appreciated.

Download the last project (Special Effects) at this URL: http://golabistudio.com/siamak/portfolio.html
The C++ source is not available, but the shader source is there.
What you need to do is pass two textures to your shader while rendering the plane. The first texture is your standard skybox texture. The second texture is your day/night-cycle texture; at its simplest it can be a wide gradient texture of height 1, going from blue to dark. With this second texture passed to your shader, you can pick one pixel out of it at position x = time and add that color to your diffuse texture (the first texture).
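Host-side, the setup might look something like this (a minimal sketch; the program/texture handles and the uniform names skyTexture, gradientTexture and dayTime are assumptions chosen to match the shader snippet further below):

// Minimal host-side sketch: bind the skybox texture and the day/night
// gradient to two texture units and upload the time of day as a 0..1 uniform.
GLuint program;        // your linked shader program
GLuint skyTex;         // the standard skybox texture
GLuint gradientTex;    // the 1-pixel-high day/night gradient
float  dayTime = 0.5f; // 0 = midnight, 0.5 = noon, 1 = midnight again

glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, skyTex);
glUniform1i(glGetUniformLocation(program, "skyTexture"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, gradientTex);
glUniform1i(glGetUniformLocation(program, "gradientTexture"), 1);
glUniform1f(glGetUniformLocation(program, "dayTime"), dayTime);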
The next step is the sunrise. Again, at its simplest, you create a texture of width 2, with one column holding the sunrise gradient and the other the night gradient (enlarged in width below):
http://i.imgur.com/Rl8XJ.png
Now, given your incoming UV coordinate for the diffuse texture (the first skybox texture), you do the following:
float2 uvpos;
uvpos.y = IN.uv.y; // same vertical position as in the skybox texture
uvpos.x = 0.0;     // sample from the first horizontal pixel (night gradient)
float4 colour1 = tex2D(gradientTexture, uvpos);
uvpos.x = 0.5;     // sample from the second horizontal pixel (sunrise gradient)
float4 colour2 = tex2D(gradientTexture, uvpos);
float4 skycolor = lerp(colour1, colour2, dayTime); // dayTime: your 0..1 time-of-day factor
skycolor.xyz += tex2D(skyTexture, IN.uv).xyz;
It's a very simple implementation, but it should get you going, I think.

Create a graphics asset that can be used for different team colors

I'm making a 2D game where some graphics assets must be used with two user-selected team colors.
What workflow do game developers use so that the graphics artist only needs to draw each asset once, while the code lets the end user choose the two team colors the asset will be rendered in?
Note that each color may be drawn with antialiasing against the (transparent) background or against another color.
Rendering is done with OpenGL.
If you only want solid colors, you can pass a color alongside each vertex, pass it from the vertex shader to the fragment shader, and then simply set the fragment color in the fragment shader to that value.
If you want to use textures that can be colored, or some other more complex scenario, you need to mix the vertex color with the texture color.
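A minimal GLSL sketch of that mix (the attribute/uniform names, and the use of the texel's alpha as the team-color weight, are assumptions for illustration):

// Fragment shader sketch: blend the texture toward the per-vertex team color.
#version 330 core
in vec2 vUV;
in vec4 vColor;             // team color, passed through from the vertex shader
uniform sampler2D uTexture; // the sprite texture
out vec4 fragColor;

void main() {
    vec4 texel = texture(uTexture, vUV);
    // here the texel's alpha is (ab)used as the "how much team color" weight
    fragColor = vec4(mix(texel.rgb, vColor.rgb, texel.a), 1.0);
}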
The most common/simple approaches would be:
1. Use a shader and a second alpha channel, and apply the team colour (in varying amounts based on that channel) at draw time. The extra channel is wasteful, however, and depending on your use case shaders may not be appropriate.
2. There presumably aren't thousands of teams, so you could just precalculate sprite sheets per team using the same method as #1, but at application/level startup (see the sketch after this list). The memory overhead for this is negligible, and the runtime performance impact is nil.
NB: in your source sprite you likely want a neutral colour wherever the team colours will go, so that the blending works correctly.
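A rough CPU-side sketch of option #2, assuming RGBA8 sprite pixels and a separate single-channel team mask (all names are hypothetical):

// Bake a per-team copy of a sprite at startup.
#include <cstdint>
#include <vector>

struct Color { uint8_t r, g, b; };

std::vector<uint8_t> bakeTeamSprite(const std::vector<uint8_t>& spriteRGBA, // w*h*4 bytes
                                    const std::vector<uint8_t>& teamMask,   // w*h bytes
                                    Color team)
{
    std::vector<uint8_t> out = spriteRGBA;
    for (size_t i = 0; i < teamMask.size(); ++i) {
        float a = teamMask[i] / 255.0f; // how strongly this pixel takes the team colour
        out[i * 4 + 0] = uint8_t(out[i * 4 + 0] * (1.0f - a) + team.r * a);
        out[i * 4 + 1] = uint8_t(out[i * 4 + 1] * (1.0f - a) + team.g * a);
        out[i * 4 + 2] = uint8_t(out[i * 4 + 2] * (1.0f - a) + team.b * a);
        // the sprite's own alpha channel is left untouched
    }
    return out; // upload with glTexImage2D as usual
}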
Example of #1 / #2 in shadertoy: https://www.shadertoy.com/view/ssc3WM
Notes re. example:
This example generates a "team alpha" in the Buffer A tab, but this could just as easily be authored manually.
I added a colour cycle just to more clearly demonstrate that the team colour is dynamic
Whether you choose to do it in real time or precalculate it, the method is the same:
// Team color (e.g. red)
vec3 teamColor = vec3(1.0, 0.0, 0.0);
// Pixel color of the original sprite
vec3 spriteColor = texture(/* your sprite texture */, uv).xyz;
// Get just the red component of our black-and-white mask input
float teamAlpha = texture(/* your team mask texture */, uv).x;
// Optional 0..1 "intensity", allowing you to turn the blend down
float teamColorIntensity = 1.0;
// Blend the original sprite with the team colour, based on the team alpha
vec3 col = mix(spriteColor, teamColor, teamAlpha * teamColorIntensity);
EDIT: adding screenshots from the Shadertoy example, since it was questioned whether blending and antialiasing work (they do).
Note that the team mask has varying alpha, so blending is happening; Shadertoy doesn't allow uploading images, so we're stuck with Nyan Cat.
Here's the same Shadertoy with linear interpolation active; note that the blended crumbs' edges and body receive varying amounts of the "team" colour.
Visual example (input textures and result screenshots) in case the Shadertoy eventually vanishes. (Forgive my cruddy pixel art; look at the Shadertoy example for a better demo.)

Is there a way to partially tint bitmaps in Allegro?

I'm programming a game in Allegro 5 and I'm currently working on my drawing algorithm. After calculation I end up with two ALLEGRO_BITMAP* objects: one is my "scene" with the terrain drawn on it, the other is a shadow map.
The scene is simply the textures of game elements drawn onto the bitmap.
The shadow map is a previously rendered bitmap that uses black for light and white for shadow.
To draw those bitmaps on the screen I use al_draw_scaled_bitmap(...) with al_set_blender(ALLEGRO_DEST_MINUS_SRC, ALLEGRO_ONE, ALLEGRO_ONE) to subtract the white elements of the shadow map from the scene and make the shadows visible.
The problem is that I want all pixels that are white on the shadow map to be tinted with a world color (calculated each frame beforehand), while black elements stay unmodified (gray meaning partially tinted).
The final color could be calculated as p.r * c.r + (1 - p.r), with p being the pixel color on the scene and c the world color, applied in the same way to the red, green and blue channels.
Is there any way to achieve a partial tinting effect in Allegro 5 (ideally without massive overdraw)?
I thought of using shaders, but I haven't found out how to use them with my ALLEGRO_BITMAP* objects.
Allegro's blenders are fairly simple, so you will need a shader for this. Write the shader, call al_use_shader with it when you're drawing your shadow map, and al_use_shader(NULL) when you're done. The shader can use the default vertex source, which you can get with al_get_default_shader_source, so you only have to write the fragment shader.
Your fragment shader should have a uniform vec4 which is the tint colour. You would also have x, y floats which represent the x and y values of the destination, and a sampler which is the scene bitmap (which should be the target, since you're drawing to it). You'd sample from the sampler at x, y (x and y should be from 0 to 1; you can get them as xpixel / width, and likewise for y), do your calculation with the input colour, and store the result in gl_FragColor. (This assumes you're using GLSL/OpenGL; the same principle applies to Direct3D.) Check out the default shaders in src/shader_source.inc in the Allegro source code for more info on writing shaders.
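For instance, here is a sketch of a fragment shader that applies the question's p.r * c.r + (1 - p.r) formula directly to the shadow map, intended to be drawn over the scene with a multiplicative blender (al_set_blender(ALLEGRO_ADD, ALLEGRO_DEST_COLOR, ALLEGRO_ZERO)). al_tex and varying_texcoord follow Allegro's default shader conventions; world_color is a hypothetical uniform you would set each frame with al_set_shader_float_vector:

// Fragment shader sketch (GLSL). al_tex is the bitmap being drawn,
// i.e. the shadow map (white = shadow, black = light).
uniform sampler2D al_tex;
uniform vec4 world_color; // the per-frame world tint
varying vec2 varying_texcoord;

void main()
{
    vec4 p = texture2D(al_tex, varying_texcoord);
    // channel-wise p * c + (1 - p): shadowed pixels take the world colour,
    // lit pixels come out as 1.0 and leave the scene unchanged when multiplied
    vec3 m = p.rgb * world_color.rgb + (vec3(1.0) - p.rgb);
    gl_FragColor = vec4(m, 1.0);
}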

Write to texture GLSL

I want to be able to add one texture to another in the fragment shader. Right now I have projective texturing working and want to expand on that.
Here is what I have so far:
I'm also drawing the view frustum along which the blue/gray test image is projected onto the geometry, which is in constant rotation.
My vertex shader:
ProjTexCoord = ProjectorMatrix * ModelTransform * raw_pos;
My Fragment Shader:
vec4 diffuse = texture(texture1, vs_st);
vec4 projTexColor = textureProj(texture2, ProjTexCoord);
vec4 shaded = diffuse; // max(intensity * diffuse, ambient); -- no shadows for now

// keep the plain shading for fragments outside the projector's frustum
if (ProjTexCoord[0] > 0.0 ||
    ProjTexCoord[1] > 0.0 ||
    ProjTexCoord[0] < ProjTexCoord[2] ||
    ProjTexCoord[1] < ProjTexCoord[2]) {
    diffuse = shaded;
} else if (dot(n, projector_aim) < 0) {
    // the surface faces the projector: use the projected texture
    diffuse = projTexColor;
} else {
    diffuse = shaded;
}
What I want to achieve:
When, for example, the user presses a button, I want the blue/gray texture to be written into the gray texture on the sphere and rotate with it. Imagine it as a sort of "taking a picture", or painting on top of the sphere, so that the blue/gray texture spins with the sphere after the button is pressed.
Since the fragment shader operates on each pixel, it should be possible to copy pixel by pixel from one texture to the other, but I have no clue how; I might be googling for the wrong terms.
How can I achieve this technically? What method is most versatile? Suggestions are very much appreciated; please let me know if more code is necessary.
Just to be clear: you'd like to bake decals into your sphere's grey texture.
The trouble with writing to the grey texture while drawing another object is that it's not one-to-one. You may write twice or more to the same texel, or a single fragment may need to write to many texels in your grey texture. It may sound attractive because you already have the coordinates of everything in one place, but I wouldn't do this.
I'd start by creating a texture containing the object-space position of each texel in your grey texture. This is key: with it, when you click, you can render to your grey texture (using an FBO) and know where each texel is in your current view or in your projective texture's view. (There may be edge cases where the same bit of texture appears on multiple triangles.) You can create this position texture by rendering your sphere with its texture coordinates used as the vertex positions; you will probably need a floating-point texture for this. (The following image probably isn't the sphere's actual texture mapping, but it'll do for demonstration :P.)
So when you click, you render a full-screen quad into your grey texture with alpha blending enabled. Using the object-space positions, each fragment computes its image-space position within the blue texture's projection. Discard the fragments that fall outside the texture and sample/blend in those that are inside.
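A rough GLSL sketch of that bake pass's fragment shader, under the assumptions above (all uniform names are hypothetical):

// Bake-pass fragment shader: run over a full-screen quad rendered into
// the grey texture via an FBO.
#version 330 core
in vec2 vUV;                        // texcoord within the grey texture
uniform sampler2D uObjectPositions; // object-space position of each grey texel
uniform sampler2D uDecalTexture;    // the blue/gray projected image
uniform mat4 uProjectorMatrix;      // projector's view-projection matrix
uniform mat4 uModelTransform;       // sphere's model matrix at click time
out vec4 fragColor;

void main() {
    vec3 objPos = texture(uObjectPositions, vUV).xyz;
    vec4 proj = uProjectorMatrix * uModelTransform * vec4(objPos, 1.0);
    vec2 decalUV = proj.xy / proj.w * 0.5 + 0.5; // NDC -> [0,1]
    // keep only fragments that land inside the projector's image
    if (proj.w <= 0.0 || any(lessThan(decalUV, vec2(0.0))) ||
        any(greaterThan(decalUV, vec2(1.0))))
        discard;
    fragColor = texture(uDecalTexture, decalUV); // alpha-blends into the grey texture
}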
I think you are overcomplicating things.
Writes to textures from inside classic shaders (i.e. not compute shaders) are only implemented on the latest hardware, with very recent OpenGL versions and extensions.
It can be terribly slow if used wrongly; it's easy to introduce pipeline stalls and CPU-GPU sync points.
The pixel shader can become a terribly slow, unmaintainable mess of branches and texture fetches.
And all of this mess would run for every single pixel, every single frame.
Solution: KISS
Just update your texture on the CPU side:
Write to the texture, replacing parts of it with the desired content (a minimal sketch follows this list).
The update only needs to happen once, and only when you need it. The data persists until you rewrite it (not once per frame, but once per change request).
The pixel shader stays dead simple: no branching, one texture.
To find the target pixels, implement ray picking (you will need it anyway for any non-trivial interactive 3D graphics program).
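A minimal sketch of that CPU-side update, assuming an RGBA8 texture and a pixel buffer you have already filled with the decal (all names are hypothetical):

// Replace a region of the sphere's texture with new pixel data.
GLuint sphereTex;                // the sphere's grey texture
std::vector<uint8_t> pixels;     // width*height RGBA8 decal pixels, filled elsewhere
int x = 64, y = 64;              // target offset, found via ray picking
int width = 128, height = 128;   // decal size

glBindTexture(GL_TEXTURE_2D, sphereTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());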
P.S. "Everything should be made as simple as possible, but not simpler." (Albert Einstein)

Outline effects in OpenGL

In OpenGL, I can outline objects by drawing the object normally, then drawing it again as a wireframe, using the stencil buffer so that the original object is not drawn over. However, this results in outlines of one solid color.
In this image, the pixels of the creature's outline seem to get more transparent the further they are from the creature they outline. How can I achieve a similar effect with OpenGL?
They did not use wireframe for this. I guess it is heavily shader-related and requires the following (a rough sketch of the passes follows the list):
1. Render the object to a stencil buffer.
2. Render the stencil buffer with the color of choice while applying a blur.
3. Render the model on top of it.
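A rough C++/OpenGL sketch of those passes. Note one practical deviation: the silhouette mask is rendered into an offscreen color texture rather than the literal stencil buffer, since a stencil buffer can't be sampled and blurred directly. maskFBO, the programs and the draw helpers are all assumed to exist:

// Pass 1: render the creature's silhouette (flat white) into an offscreen mask.
glBindFramebuffer(GL_FRAMEBUFFER, maskFBO);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(flatWhiteProgram);   // outputs solid white
drawCreature();

// Pass 2: draw a full-screen quad that samples a blurred version of the mask
// and outputs the outline color with the blur value as alpha, so the halo
// fades out with distance from the silhouette.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glUseProgram(blurOutlineProgram);
drawFullScreenQuad();

// Pass 3: draw the model normally on top of the soft halo.
glUseProgram(modelProgram);
drawCreature();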
I'm late with an answer, but I was trying to achieve the same thing and thought I'd share the solution I'm using.
A similar effect can be achieved in a single draw operation with a not-so-complex shader.
In the fragment shader, you calculate the color of the fragment based on lighting and texturing, giving you the un-highlighted color, colorA.
Your second color is the outline color, colorB.
You then take the fragment-to-camera vector, normalize it, and compute the dot product of this vector with the fragment's normal.
The fragment-to-camera vector is simply the negation of the fragment's position in eye space.
The colour of the fragment is then:
float cameraFacingPercentage = dot(v_fragmentToCamera, v_normal);
gl_FragColor = colorA * cameraFacingPercentage + colorB * (1.0 - cameraFacingPercentage);
This is the basic idea, but you'll have to play around to get more or less of the outline color. Also, the concave parts of your model will be highlighted too, but that is also the case in the image posted in the question.
Detect edges in a GLSL shader using dot(view, normal):
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
As far as I can see, the effect in the screenshot, and many "edge" effects, are not pure edges as in a comic outline. What is mostly done is this: you have one pass where you render the object normally, then a pass with only the geometry (no textures) and a GLSL shader. In the fragment shader you take the normal, and where that normal is perpendicular to the camera vector you color the object. The effect is then smoothed by also including the area close to perfectly perpendicular.
I would have to look up the exact math, but I think if you take the dot product of the camera vector and the normal, you get the amount of "perpendicularness". You can then run that through a function like exp to bias it towards 1.
So (without guarantee that it is correct):
exp(dot(vec3(0, 0, 1), normal));
(Note: everything is in screen space.)
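A small self-contained GLSL sketch of this rim test, using a pow falloff instead of exp and view-space inputs (all names are hypothetical):

// Rim/edge-highlight fragment shader sketch.
#version 330 core
in vec3 vNormalView;        // view-space normal
in vec3 vPosView;           // view-space position
uniform vec3 uBaseColor;    // the object's normal color
uniform vec3 uOutlineColor; // the halo color
out vec4 fragColor;

void main() {
    vec3 toCamera = normalize(-vPosView); // fragment-to-camera vector
    float facing = max(dot(normalize(vNormalView), toCamera), 0.0);
    // rim grows where the surface turns away from the camera;
    // the exponent sharpens the falloff toward the silhouette
    float rim = pow(1.0 - facing, 3.0);
    fragColor = vec4(mix(uBaseColor, uOutlineColor, rim), 1.0);
}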

Darkening quad to simulate AO

I want to make a color darker based on an occlusion factor I calculate on my map; I need it to simulate AO. I thought about blending the face with a dark color texture as one possible solution, but I am wondering if there is anything I can do without using textures.
The engine is Minecraft-like, and I calculate the occlusion factor from the voxels surrounding the face under consideration.
I am not looking for a complicated method, because this is only a first implementation.
Cheers.
I need a very simple effect right now; SSAO is out of the question for many reasons. I think the only solution is to color the vertices, but how can I do that in a way that the face does not end up looking completely black?
What do you think about the second solution, blending a texture with the quad color?
In "Graphics Setting 1" = vertex coloring and "Graphics Setting 2" = texture blending.