I am creating a 3D scene (currently a box and a rect) and trying to enable lighting.
When I create a PointLight and add it to the Environment, everything turns black.
All I want to do is create a 3D scene with a point light enabled, like a sun or rays coming from a point, shading the objects.
Code:
environment = new Environment();
environment.add(new PointLight().set(1f, 1f, 1f, 0, 0, 20f, 100f));
modelBatch = new ModelBatch();
..
square = new ModelBuilder().createBox(300, 300, 300,
        new Material(ColorAttribute.createDiffuse(Color.GREEN)),
        VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
squareinst = new ModelInstance(square);
squareinst.transform.setTranslation(-500, 0, 0);
--
sprites.get(0).setRotationY(sprites.get(0).getRotationY() + 1f);
sprites.get(1).setRotationY(sprites.get(1).getRotationY() - 1f);
squareinst.transform.rotate(1, 0, 0, 1);
modelBatch.begin(camera);
for (Sprite3D sp : sprites) // has 3D rect models
    sp.draw(modelBatch, environment);
modelBatch.render(squareinst, environment);
modelBatch.end();
(Screenshot: PointLight turning everything black)
(Screenshot: without using an environment or lights)
As per my investigation: if the PointLight is not working, then everything should be black, as it currently is, because the environment needs a light. It works fine with a DirectionalLight (though the back face of the rect stays black even after rotations, and I don't know why).
libGDX version 1.6.1, Android Studio.
I checked it on both an Android device and the desktop.
Please, I really need to get this PointLight working. I don't know if it will take a custom shader; if so, please guide me to some links, because I am not experienced with shaders. I also read about PointLight not working on some devices, or not working with OpenGL ES 2.0 enabled, but I am not sure.
I tried a lot of things and values. I know about ambient light, but that is of no use in my case. A directional light also has limited use (it can serve as a fallback if this doesn't work).
Edit:
It's working now; check the answer below:
If you are using a big camera size or big model size, try adding more zeros to the PointLight intensity until the light becomes visible.
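For example (a rough sketch; the intensity value here is picked only for illustration): the default shader attenuates a point light roughly with the square of the distance, so with 300-unit boxes offset by 500 units, the intensity has to be several orders of magnitude larger than in a unit-sized scene.
environment.add(new PointLight().set(1f, 1f, 1f, // colour
        0f, 0f, 20f,                             // position
        500000f));                               // intensity scaled up for a scene hundreds of units across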
Here is a very simple example that shows a point light being rotated around a sphere:
public class PointLightTest extends ApplicationAdapter {
ModelBatch modelBatch;
Environment environment;
PerspectiveCamera camera;
CameraInputController camController;
PointLight pointLight;
Model model;
ModelInstance instance;
@Override
public void create () {
modelBatch = new ModelBatch();
camera = new PerspectiveCamera();
camera.position.set(5f, 5f, 5f);
camera.lookAt(0f, 0f, 0f);
camController = new CameraInputController(camera);
environment = new Environment();
environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1.0f));
environment.add(pointLight = new PointLight().set(0.8f, 0.8f, 0.8f, 2f, 0f, 0f, 10f));
ModelBuilder mb = new ModelBuilder();
model = mb.createSphere(1f, 1f, 1f, 20, 10, new Material(ColorAttribute.createDiffuse(Color.GREEN)), Usage.Position | Usage.Normal);
instance = new ModelInstance(model);
Gdx.input.setInputProcessor(camController);
}
@Override
public void resize (int width, int height) {
camera.viewportWidth = width;
camera.viewportHeight = height;
camera.update();
}
@Override
public void render () {
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
camController.update();
pointLight.position.rotate(Vector3.Z, Gdx.graphics.getDeltaTime() * 90f);
modelBatch.begin(camera);
modelBatch.render(instance, environment);
modelBatch.end();
}
@Override
public void dispose () {
model.dispose();
modelBatch.dispose();
}
}
Note that the light needs to be outside the model and within range for it to light the model. Try what happens when you gradually move the light away from or towards the model. The Renderable in that other example was used to visualize the location of the light.
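A minimal sketch of that visualization idea (the names here are illustrative, not from the original example): render a small unlit marker model at the light's position each frame, so you can see where the PointLight actually sits relative to the sphere.
// in create():
Model markerModel = new ModelBuilder().createSphere(0.2f, 0.2f, 0.2f, 10, 10,
        new Material(ColorAttribute.createDiffuse(Color.YELLOW)),
        Usage.Position | Usage.Normal);
ModelInstance marker = new ModelInstance(markerModel);

// in render(), between modelBatch.begin() and modelBatch.end():
marker.transform.setTranslation(pointLight.position);
modelBatch.render(marker); // rendered without the environment, so it stays unshaded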
What I want to do:
The main goal: use SkiaSharp and OpenTK together, rendering 2D and 3D.
The problem: SkiaSharp messes up the OpenGL state, so I can't use it for 3D without saving and restoring some states.
Old solution (with OpenGL < 4): I used GL.PushClientAttrib(ClientAttribMask.ClientAllAttribBits); plus some additional values that I saved and restored myself.
Now I have read that this is not necessarily the best solution, and that OpenGL 4 no longer has GL.PushClientAttrib. The usual way seems to be to use a separate OpenGL context.
Already seen: OpenTK multiple GLControl with a single Context
I am not using GLControl because I am not using WinForms, so this is not really helpful. What I tried:
internal class Program
{
public static void Main(string[] args)
{
new Program().Run();
}
private readonly GameWindow _gameWindow;
private IGraphicsContext _context2;
private GlObject _glObject;
private int _programId;
private GlObject _glObject2;
private int _programId2;
public Program()
{
_gameWindow = new GameWindow(800,600,
GraphicsMode.Default, "", GameWindowFlags.Default,
DisplayDevice.Default,
4, 2, GraphicsContextFlags.ForwardCompatible);
_gameWindow.Resize += OnResize;
_gameWindow.RenderFrame += OnRender;
_gameWindow.Load += OnLoad;
}
public void Run()
{
_gameWindow.Run();
}
private void OnLoad(object sender, EventArgs e)
{
_programId = ShaderFactory.CreateShaderProgram();
_glObject = new GlObject(new[]
{
new Vertex(new Vector4(-0.25f, 0.25f, 0.5f, 1f), Color4.Black),
new Vertex(new Vector4(0.0f, -0.25f, 0.5f, 1f), Color4.Black),
new Vertex(new Vector4(0.25f, 0.25f, 0.5f, 1f), Color4.Black),
});
_context2 = new GraphicsContext(GraphicsMode.Default, _gameWindow.WindowInfo, 4, 2,
GraphicsContextFlags.Default);
_context2.MakeCurrent(_gameWindow.WindowInfo);
_programId2 = ShaderFactory.CreateShaderProgram();
_glObject2 = new GlObject(new[]
{
new Vertex(new Vector4(-0.25f, 0.25f, 0.5f, 1f), Color4.Yellow),
new Vertex(new Vector4(0.0f, -0.25f, 0.5f, 1f), Color4.Yellow),
new Vertex(new Vector4(0.25f, 0.25f, 0.5f, 1f), Color4.Yellow),
});
_gameWindow.MakeCurrent();
}
private void OnRender(object sender, FrameEventArgs e)
{
_gameWindow.Context.MakeCurrent(_gameWindow.WindowInfo);
GL.Viewport(0, 0, _gameWindow.Width, _gameWindow.Height);
GL.ClearColor(0.3f,0.1f,0.1f,1);
GL.Clear(ClearBufferMask.ColorBufferBit);
GL.UseProgram(_programId);
_glObject.Render();
GL.Flush();
_gameWindow.SwapBuffers();
// I tried different combinations here;
// as I read, GL.Clear will always clear the whole window
_context2.MakeCurrent(_gameWindow.WindowInfo);
GL.Viewport(10,10,100,100);
//GL.ClearColor(0f, 0.8f, 0.1f, 1);
//GL.Clear(ClearBufferMask.ColorBufferBit);
GL.UseProgram(_programId2);
_glObject2.Render();
GL.Flush();
_context2.SwapBuffers();
}
private void OnResize(object sender, EventArgs e)
{
var clientRect = _gameWindow.ClientRectangle;
GL.Viewport(0, 0, clientRect.Width, clientRect.Height);
}
}
Vertex shader:
#version 450 core
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
out vec4 vs_color;
void main(void)
{
gl_Position = position;
vs_color = color;
}
Fragment shader:
#version 450 core
in vec4 vs_color;
out vec4 color;
void main(void)
{
color = vs_color;
}
It works fine with a single context. When I use both contexts, the first context gets rendered but flickers, and there is no second triangle visible at all (as I understand GL.Viewport, it should be visible in the lower left corner of the screen).
You could help me by answering one or more of the following questions:
Is there another way to restore the original context?
Is there another way to render with HW acceleration on a part of the screen, ideally with specified OpenGL states for that specific area?
How can I get the solution above to work the way I want (no flicker, with a smaller scene rendered inside a smaller portion of the window)?
After trying some more combinations, what did the trick was:
Call SwapBuffers only on the last used context in the render handler (even when you use three contexts). Then no flicker occurs and rendering seems to work fine. The states of the contexts appear to be independent of each other.
Sorry if the question is a bit niche. I wrote some code a few years back that renders many meshes as one big mesh for performance.
What I am trying to do now is render meshes without textures, i.e. with a single colour:
boxModel = modelBuilder.createBox(10f, 10f, 10f,
    Material(ColorAttribute.createDiffuse(Color.WHITE),
        ColorAttribute.createSpecular(Color.RED),
        FloatAttribute.createShininess(15f)),
    (VertexAttributes.Usage.Position or VertexAttributes.Usage.Normal
        or VertexAttributes.Usage.TextureCoordinates).toLong())
for (x in 1..10) {
for (y in 1..10) {
modelInstance = ModelInstance(boxModel, x * 15f, 0.0f, y * 15f)
chunks2[0].addMesh(modelInstance.model.meshes, modelInstance.transform, btBoxShape(Vector3(10f, 10f, 10f)))
}
}
chunks2[0].mergeBaby()
So I build up the giant mesh and then render it:
shaderProgram.begin()
texture.bind()
shaderProgram.setUniformMatrix("u_projTrans", camera.combined)
shaderProgram.setAttributef("a_color", 1f, 1f, 1f, 1f)
shaderProgram.setUniformi("u_texture", 0)
renderChunks()
shaderProgram.end()
This works great for textured stuff, and the right texture is shown, etc., but the base colour (I guess that's a_color, set to white) is used where I actually want it to use what I supplied in the Material.
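A minimal Java sketch of one way around this (u_diffuseColor is a hypothetical uniform name; the custom shader is not shown in the question): a plain ShaderProgram knows nothing about libGDX Material attributes, so the diffuse colour can be read back out of the Material and uploaded explicitly.
// Pull the diffuse colour out of the box's Material and hand it to the shader.
ColorAttribute diffuse =
        (ColorAttribute) boxModel.materials.first().get(ColorAttribute.Diffuse);
shaderProgram.setUniformf("u_diffuseColor",
        diffuse.color.r, diffuse.color.g, diffuse.color.b, diffuse.color.a);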
I'm making a weather simulation in OpenGL 4.0 and am trying to create the sky by rendering a fullscreen quad in the background. I'm trying to do that by having the vertex shader pass through four vertices and then drawing a triangle strip. Everything compiles just fine and I can see all the other objects I've made before, but the sky is nowhere to be seen. What am I doing wrong?
main.cpp
GLint stage = glGetUniformLocation(myShader.Program, "stage");
//...
glBindVertexArray(FS); //has four coordinates (-1,-1,1) to (1,1,1) in buffer object
glUniform1i(stage, 1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);
Vertex shader:
in vec3 position; // input declaration implied by gl_Position = vec4(position, 1.0f)
uniform int stage;
void main()
{
if (stage==1)
{
gl_Position = vec4(position, 1.0f);
}
else
{
//...
}
}
Fragment shader:
out vec4 color; // output declaration implied by the assignment below
uniform int stage;
void main()
{
if (stage==1)
{ //placeholder gray colour so I can see the sky
color = vec4(0.5f, 0.5f, 0.5f, 1.0f);
}
else
{
//...
}
}
I should also mention that I'm a beginner in OpenGL and that it really has to be in OpenGL 4.0 or later.
EDIT:
I've figured out where the problem is, but I still don't know how to fix it. The square exists, but it only displays if I multiply it by the view and projection matrices (but then it doesn't stay glued to the screen and just rotates along with the rest of the scene, which I do not want). Essentially, I somehow need to switch to 2D (screen space), draw the square, and then switch back to 3D so that all the other objects work fine. How?
The issue was with putting a 1 as the z coordinate; putting 0.999f instead solved it. (With the depth buffer cleared to 1.0 and the default GL_LESS depth test, a fragment at exactly z = 1.0 fails the depth test, so the quad was being rejected at the far plane.)
I want to add fog to a small 3D world. I tried fiddling with the arguments; however, the fog is not homogeneous.
I have two problems that are maybe linked:
Fog homogeneity:
When I move or rotate my viewpoint with gluLookAt, the fog is too heavy and the whole world is grey. However, there are two angles where the rendering of the fog is nice:
the fog seems normal when the camera orientation on the Y axis is 45° or -135° (the opposite direction).
Fog centered on the origin of the scene:
When my fog is displayed correctly, it is centered on (0;0;0) in the scene.
Here is the code I use to initialise the fog, and the call to gluLookAt:
private static final float density = 1f;
private void initFog() {
float[] vertices = {0.8f, 0.8f, 0.8f, 1f};
ByteBuffer temp = ByteBuffer.allocateDirect(16);
temp.order(ByteOrder.nativeOrder());
FloatBuffer fogColor = temp.asFloatBuffer();
fogColor.put(vertices);
GL11.glClearColor(0.8f,0.8f,0.8f,1.0f);
GL11.glFogi(GL11.GL_FOG_MODE, GL11.GL_LINEAR);
GL11.glFog(GL11.GL_FOG_COLOR, temp.asFloatBuffer());
GL11.glFogf(GL11.GL_FOG_DENSITY, density);
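// GL_FOG_DENSITY only affects GL_EXP/GL_EXP2 fog; in GL_LINEAR mode
// the GL_FOG_START/GL_FOG_END values below are used instead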
GL11.glHint(GL11.GL_FOG_HINT, GL11.GL_FASTEST);
GL11.glFogf(GL11.GL_FOG_START, 1f);
GL11.glFogf(GL11.GL_FOG_END, 10000f);
}
private void initWindow() {
try {
Display.setDisplayMode(new DisplayMode(1600, 900));
Display.create();
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GLU.gluPerspective(60f, 1600f / 900f, 3, 100000);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glEnable(GL11.GL_FOG);
GL11.glEnable(GL11.GL_DEPTH_TEST);
initFog();
initParticles();
} catch (LWJGLException e) {
Display.destroy();
System.exit(1);
}
}
Called from the updatePosition function inside the main loop.
The angle parameter is the direction of the viewport on the Y axis, and yCpos is a value between -1 and 1 that I use to look up or down.
GL11.glLoadIdentity();
GLU.gluLookAt(xpos, ypos, zpos, xpos + (float) Math.cos(angle), ypos + yCpos, zpos + (float) Math.sin(angle), 0, 1, 0);
I was drawing the ground as one giant quad; now I draw the ground as tiles, and the problem isn't happening any more. The cause therefore remains somewhat mysterious, but the problem is solved. (A likely explanation: fixed-function fog is evaluated per vertex and interpolated across each polygon, so a single huge quad gives the fog factor almost nothing to interpolate from.)
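A minimal sketch of the tiling fix, in the same LWJGL immediate-mode style as the question (the grid size and extent are arbitrary): each tile contributes its own vertices, so the per-vertex fog factor is sampled densely across the ground instead of only at four distant corners.
int tiles = 64;              // number of tiles per side (arbitrary)
float size = 10000f / tiles; // world size of one tile (arbitrary extent)
GL11.glBegin(GL11.GL_QUADS);
for (int i = 0; i < tiles; i++) {
    for (int j = 0; j < tiles; j++) {
        float x = i * size, z = j * size;
        GL11.glVertex3f(x,        0f, z);
        GL11.glVertex3f(x + size, 0f, z);
        GL11.glVertex3f(x + size, 0f, z + size);
        GL11.glVertex3f(x,        0f, z + size);
    }
}
GL11.glEnd();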
I'm writing an engine and using light 0 as the "sun" for the scene. The sun is a directional light.
I set up the scene's ortho viewpoint, then set up the light to be on the "east" side of the screen (and of the character). x/y are the coordinates of the plane terrain, with positive z facing the camera and indicating "height" on the terrain; the scene is also rotated for an isometric view on the x axis.
The light seems to be shining fine "east" of (0,0,0), but as the character moves it does not shift (CenterCamera does a glTranslate3f on the negatives of the values provided, so that callers can specify world coordinates). Meaning: the further I move to the west, it's ALWAYS dark, with no light.
Graphics.BeginRenderingLayer();
{
Video.MapRenderingMode();
Graphics.BeginLightingLayer( Graphics.AmbientR, Graphics.AmbientG, Graphics.AmbientB, Graphics.DiffuseR, Graphics.DiffuseG, Graphics.DiffuseB, pCenter.X, pCenter.Y, pCenter.Z );
{
Graphics.BeginRenderingLayer();
{
Graphics.CenterCamera( pCenter.X, pCenter.Y, pCenter.Z );
RenderMap( pWorld, pCenter, pCoordinate );
}
Graphics.EndRenderingLayer();
Graphics.BeginRenderingLayer();
{
Graphics.DrawMan( pCenter );
}
Graphics.EndRenderingLayer();
}
Graphics.EndLightingLayer();
}
Graphics.EndRenderingLayer();
Graphics.BeginRenderingLayer = PushMatrix; EndRenderingLayer = PopMatrix.
Video.MapRenderingMode = ortho projection and scene rotation/zoom.
CenterCamera translates to the opposite of X/Y/Z, so that the character ends up centered at X/Y/Z in the middle of the screen.
Any thoughts? Maybe I've confused some of my code here a little?
The lighting code is as follows:
public static void BeginLightingLayer( float pAmbientRed, float pAmbientGreen, float pAmbientBlue, float pDiffuseRed, float pDiffuseGreen, float pDiffuseBlue, float pX, float pY, float pZ )
{
Gl.glEnable( Gl.GL_LIGHTING );
Gl.glEnable( Gl.GL_NORMALIZE );
Gl.glEnable( Gl.GL_RESCALE_NORMAL );
Gl.glEnable( Gl.GL_LIGHT0 );
Gl.glShadeModel( Gl.GL_SMOOTH );
float[] AmbientLight = new float[4] { pAmbientRed, pAmbientGreen, pAmbientBlue, 1.0f };
float[] DiffuseLight = new float[4] { pDiffuseRed, pDiffuseGreen, pDiffuseBlue, 1.0f };
float[] PositionLight = new float[4] { pX + 10.0f, pY, 0, 0.0f };
//Light position of Direction is 5 to the east of the player.
Gl.glLightfv( Gl.GL_LIGHT0, Gl.GL_AMBIENT, AmbientLight );
Gl.glLightfv( Gl.GL_LIGHT0, Gl.GL_DIFFUSE, DiffuseLight );
Gl.glLightfv( Gl.GL_LIGHT0, Gl.GL_POSITION, PositionLight );
Gl.glEnable( Gl.GL_COLOR_MATERIAL );
Gl.glColorMaterial( Gl.GL_FRONT_AND_BACK, Gl.GL_AMBIENT_AND_DIFFUSE );
}
You will need to provide normals for each surface. What is happening (without normals) is that the directional light is essentially shining on everything east of zero, positionally, while everything there has a normal of (0,0,1) (it faces west).
You do not need to send normals with each vertex as far as I can tell; rather, because GL is a state machine, you need to make sure that whenever the normal changes, you change it. So if you're rendering a face of a cube, the "west" face should have a single call:
glNormal3i(0,0,1);
glTexCoord..
glVertex3f...
glTexCoord..
etc.
In the case of x-y-z aligned rectangular prisms, integers are sufficient. For faces that do not face one of the six cardinal directions, you will need to normalize the normal. In my experience you only need the first three points, unless the quad is not flat: the normal is found from the triangle formed by the first three vertices of the quad, as in the sketch below.
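A small self-contained Java sketch of that computation (the helper name is mine): the cross product of two edge vectors gives the face normal, which is then scaled to unit length.
// Face normal of a flat quad from its first three vertices (each a float[3]).
static float[] faceNormal(float[] v0, float[] v1, float[] v2) {
    float ax = v1[0] - v0[0], ay = v1[1] - v0[1], az = v1[2] - v0[2]; // edge v0->v1
    float bx = v2[0] - v0[0], by = v2[1] - v0[1], bz = v2[2] - v0[2]; // edge v0->v2
    float nx = ay * bz - az * by;                                     // cross product
    float ny = az * bx - ax * bz;
    float nz = ax * by - ay * bx;
    float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
    return new float[] { nx / len, ny / len, nz / len };              // unit normal
}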
There are a few simple tutorials on calculating normals that I found enlightening.
The second part of this is that since it is a directional light (w = 0), repositioning it with the player position doesn't make sense. Unless the light itself is being emitted from behind the camera and you are rotating an object in front of you (like a model) that you wish to always be front-lit, its position should probably be something like
float[] PositionLight = new float[4] { 0.0f, 0.0f, 1.0f, 0.0f };
Or, if the GL x direction is being interpreted as the east-west direction (i.e. you initially face north/south):
float[] PositionLight = new float[4] { 1.0f, 0.0f, 0.0f, 0.0f };
The concept is that the light is calculated per face, and if the light doesn't move and the scene itself is not moving (just the camera moving around the scene), the directional calculation will always remain correct. Provided the normals are accurate, GL can figure out the intensity of light falling on a particular face.
The final thing here is that GL will not handle shadows for you automatically. Basic GL lighting is sufficient for controlled lighting of a series of convex shapes, so you will have to figure out whether or not a light (such as the sun) should be applied to a face at all. In some cases this is just taking the solid the face belongs to and checking whether the vector of the sun's light intersects another solid before reaching the "sky".
Look for material on lightmaps as well as shadow mapping for this.
One thing that trips up many people is that the position passed to glLightfv is transformed by the current matrix stack. Thus, if you want to set your light to a specific position in world coordinates, your camera and projection matrices must be set and active on the matrix stack at the time of the glLightfv call.
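A short sketch of that ordering (in LWJGL-style Java for consistency with the fog example above; the same idea applies to the Tao/C# calls in this answer): apply the camera transform first, and only then specify the light position, so the fixed-function pipeline stores it in the intended world location.
FloatBuffer sunDir = BufferUtils.createFloatBuffer(4);
sunDir.put(new float[] { 1f, 0f, 0f, 0f }).flip(); // w = 0 -> directional light
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
// 1. camera transform first, e.g. GLU.gluLookAt(...) or the CenterCamera translate
// 2. only then submit the light position/direction:
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_POSITION, sunDir);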