I am starting to learn OpenGL, using the Go programming language (I just couldn't get C/C++ working on my Windows machine). So far I've managed to display some rotating cubes on the screen with textures, mostly by copying and pasting code from tutorials. I've learned a lot, but I just can't get any text on the screen with this code that I wrote on my own. I've looked through many tutorials and questions, but nothing seems to work. I suspect something is wrong with the vertices, because I'm fairly sure the texture coordinates are correct, and still nothing shows up on the screen. Here's the code:
package game
import (
	"fmt"
	"image"
	"image/draw"
	"io/ioutil"

	"github.com/go-gl/gl/v3.3-core/gl"
	mgl "github.com/go-gl/mathgl/mgl32"
	"github.com/golang/freetype/truetype"
	"golang.org/x/image/font"
	"golang.org/x/image/math/fixed"
)

type GameFont struct {
	loaded  bool
	vao     uint32
	vbo     VBOData
	pix     float32
	Texture *Texture
	Shader  ShaderProgram
}
// Load a TrueType font from a file and generate a texture
// with all important characters.
func (f *GameFont) Load(path string, pix float32) {
	contents, err := ioutil.ReadFile(path)
	if err != nil {
		fmt.Println("Could not read font file: " + path)
		panic(err)
	}
	fontFace, err := truetype.Parse(contents)
	if err != nil {
		fmt.Println("Could not parse font file: " + path)
		panic(err)
	}

	// Create a texture for the characters
	// Find the next power of 2 for the texture size
	size := nextP2(int(pix * 16))
	fg, bg := image.White, image.Black
	rgba := image.NewRGBA(image.Rect(0, 0, size, size))
	draw.Draw(rgba, rgba.Bounds(), bg, image.ZP, draw.Src)
	d := &font.Drawer{
		Dst: rgba,
		Src: fg,
		Face: truetype.NewFace(fontFace, &truetype.Options{
			Size:    float64(pix),
			DPI:     72,
			Hinting: font.HintingNone,
		}),
	}

	// Some GL preps
	gl.GenVertexArrays(1, &f.vao)
	gl.BindVertexArray(f.vao)
	f.vbo.Create()
	f.vbo.Bind()
	f.Shader = newShaderProgram("data/shaders/font.vert", "data/shaders/font.frag")
	f.Shader.Use()
	f.Shader.SetUniform("tex", 0)

	// Create vertex data (and coordinates in the texture) for each character
	// All characters below 32 are useless
	for i := 32; i < 128; i++ {
		c := string(rune(i))
		x, y := i%16, i/16
		// Draw the character on the texture
		d.Dot = fixed.P(x*int(pix), y*int(pix))
		d.DrawString(c)
		// Vertices
		quads := []float32{
			0, 0,
			0, pix,
			pix, 0,
			pix, pix,
		}
		norm := func(n int) float32 {
			return float32(n) / 16.0
		}
		// Texture coordinates (normalized)
		texQuads := []float32{
			norm(x), 1.0 - norm(y+1),
			norm(x), 1.0 - norm(y),
			norm(x+1), 1.0 - norm(y+1),
			norm(x+1), 1.0 - norm(y),
		}
		for v := 0; v < 8; v += 2 {
			vQuads, vTexQuads := quads[v:v+2], texQuads[v:v+2]
			// Data is like (X, Y, U, V)
			f.vbo.AppendData(vQuads, 2)
			f.vbo.AppendData(vTexQuads, 2)
		}
	}

	// Upload data to GPU and we're done
	f.Texture = newTextureFromRGBA(rgba)
	f.Texture.Bind()
	f.Texture.SetGLParam(gl.TEXTURE_MIN_FILTER, gl.LINEAR)
	f.Texture.SetGLParam(gl.TEXTURE_MAG_FILTER, gl.LINEAR)
	f.Texture.Upload()
	f.vbo.UploadData(gl.STATIC_DRAW)
	gl.EnableVertexAttribArray(0)
	gl.VertexAttribPointer(0, 2, gl.FLOAT, false, 4*4, gl.PtrOffset(0))
	gl.EnableVertexAttribArray(1)
	gl.VertexAttribPointer(1, 2, gl.FLOAT, false, 4*4, gl.PtrOffset(2*4))
	f.loaded = true
}
// Render a text using the font
func (f *GameFont) Render(text string, x, y int, pix float32, color mgl.Vec4) {
	if !f.loaded {
		return
	}
	gl.Disable(gl.DEPTH_TEST)
	gl.Enable(gl.BLEND)
	gl.BlendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
	gl.BindVertexArray(f.vao)
	f.Shader.Use()
	f.Shader.SetUniform("projection", mgl.Ortho2D(0, _screen.Width, 0, _screen.Height))
	f.Shader.SetUniform("color", color)
	f.Texture.Bind()
	scale := pix / f.pix
	for i := 0; i < len(text); i++ {
		index := rune(text[i])
		model := mgl.Ident4().Mul4(mgl.Scale3D(scale, scale, 0))
		model = model.Add(mgl.Translate3D(float32(x)+float32(i)*pix, float32(y), 0))
		f.Shader.SetUniform("model", model)
		gl.DrawArrays(gl.TRIANGLE_STRIP, (32-index)*4, 4)
	}
	gl.Enable(gl.DEPTH_TEST)
	gl.Disable(gl.BLEND)
}
Here are the shaders:
Vertex shader
#version 330

uniform mat4 projection;
uniform mat4 model;

layout (location = 0) in vec2 vert;
layout (location = 1) in vec2 vertTexCoord;

out vec2 fragTexCoord;

void main() {
	fragTexCoord = vertTexCoord;
	gl_Position = projection * model * vec4(vert, 0, 1);
}
Fragment shader
#version 330

uniform sampler2D tex;
uniform vec4 color;

in vec2 fragTexCoord;

out vec4 outputColor;

void main() {
	outputColor = color * texture(tex, fragTexCoord);
}
Every "component" of the GameFont struct is working properly (I've used them with the rotating cubes), so each function calls the corresponding GL one.
Also, the texture is being drawn correctly; I've saved it to disk and it looks like this:
And still, there's no text on the screen.
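For reference, the listing above calls a `nextP2` helper that isn't shown. Presumably it rounds the requested atlas size up to the next power of two, and each ASCII code maps to a cell in the 16x16 glyph grid via the `i%16, i/16` computation in `Load`. A minimal sketch of both (the `nextP2` implementation here is an assumption, not the asker's code):

```go
package main

import "fmt"

// nextP2 returns the smallest power of two that is >= n.
// (Sketch of the helper referenced in GameFont.Load; implementation assumed.)
func nextP2(n int) int {
	p := 1
	for p < n {
		p *= 2
	}
	return p
}

// atlasCell maps an ASCII code point to its column/row in the 16x16
// glyph grid, matching the x, y := i%16, i/16 computation in Load.
func atlasCell(code int) (x, y int) {
	return code % 16, code / 16
}

func main() {
	fmt.Println(nextP2(17)) // 32
	x, y := atlasCell('A')  // 'A' is 65 -> column 1, row 4
	fmt.Println(x, y)       // 1 4
}
```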
Related
I'm trying to iterate over a large amount of data in my fragment shader in WebGL. I want to pass a lot of data to it and then iterate over it on each pass of the fragment shader, but I'm having some issues doing that. My ideas were the following:
1. Pass the data in uniforms to the fragment shader, but I can't send very much data that way.
2. Use a buffer to send data, as I do with vertices to the vertex shader, and then use a varying to send the data to the fragment shader. Unfortunately, this seems to involve some issues: (a) varyings interpolate between vertices, and I think that will cause issues with my code (although perhaps this is unavoidable); (b) more importantly, I don't know how to iterate over the data I pass to my fragment shader. I'm already using a buffer for my 3D point coordinates, but how does WebGL handle a second buffer and the data coming through it? That is, in what order is data fetched from each buffer (my first buffer containing 3D coordinates and the second buffer I'm trying to add)? Lastly, as stated above, if I want to iterate over all the data passed on every pass of the fragment shader, how can I do that?
I've already tried using a uniform array and iterating over that in my fragment shader, but I ran into limitations, I believe, since there is a relatively small size limit for uniforms. I'm currently trying the second method mentioned above.
// pseudo code
vertexCode = `
	attribute vec4 3dcoords;
	varying vec4 3dcoords;
	??? ??? my_special_data;
	void main() {...}
`;
fragCode = `
	varying vec4 3dcoords;
	void main() {
		...
		// perform math operation on 3dcoords for all values in
		// my_special_data variable and store in variable my_results
		if (my_results ...) {
			gl_FragColor += ...;
		}
	}
`;
Textures in WebGL are random-access 2D arrays of data, so you can use them to read lots of data from a shader.
Example:
const width = 256;
const height = 256;
const vs = `
attribute vec4 position;
void main() {
  gl_Position = position;
}
`;
const fs = `
precision highp float;
uniform sampler2D tex;
const int width = ${width};
const int height = ${height};
void main() {
  vec4 sums = vec4(0);
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      vec2 xy = (vec2(x, y) + 0.5) / vec2(width, height);
      sums += texture2D(tex, xy);
    }
  }
  gl_FragColor = sums;
}
`;
function main() {
  const gl = document.createElement('canvas').getContext('webgl');
  // check if we can make floating point textures
  const ext1 = gl.getExtension('OES_texture_float');
  if (!ext1) {
    return alert('need OES_texture_float');
  }
  // check if we can render to floating point textures
  const ext2 = gl.getExtension('WEBGL_color_buffer_float');
  if (!ext2) {
    return alert('need WEBGL_color_buffer_float');
  }
  // make a 1x1 pixel floating point RGBA texture and attach it to a framebuffer
  const framebufferInfo = twgl.createFramebufferInfo(gl, [
    { type: gl.FLOAT, },
  ], 1, 1);
  // make random 256x256 texture
  const data = new Uint8Array(width * height * 4);
  for (let i = 0; i < data.length; ++i) {
    data[i] = Math.random() * 256;
  }
  const tex = twgl.createTexture(gl, {
    src: data,
    minMag: gl.NEAREST,
    wrap: gl.CLAMP_TO_EDGE,
  });
  // compile shaders, link, look up locations
  const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
  // create a buffer and put a 2 unit
  // clip space quad in it using 2 triangles
  const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
    position: {
      numComponents: 2,
      data: [
        -1, -1,
         1, -1,
        -1,  1,
        -1,  1,
         1, -1,
         1,  1,
      ],
    },
  });
  // render to the 1 pixel texture
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebufferInfo.framebuffer);
  // set the viewport for 1x1 pixels
  gl.viewport(0, 0, 1, 1);
  gl.useProgram(programInfo.program);
  // calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
  twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
  // calls gl.activeTexture, gl.bindTexture, gl.uniformXXX
  twgl.setUniforms(programInfo, {
    tex,
  });
  const offset = 0;
  const count = 6;
  gl.drawArrays(gl.TRIANGLES, offset, count);
  // read the result
  const pixels = new Float32Array(4);
  gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.FLOAT, pixels);
  console.log('webgl sums:', pixels);
  // compute the same sums in JavaScript to compare
  const sums = new Float32Array(4);
  for (let i = 0; i < data.length; i += 4) {
    for (let j = 0; j < 4; ++j) {
      sums[j] += data[i + j] / 255;
    }
  }
  console.log('js sums:', sums);
}
main();
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
I have a 3D WebGL scene. I am using Regl (http://regl.party/), which is WebGL, so I am essentially writing straight GLSL.
This is a game project. I have an array of 3D positions [[x, y, z] ...] which are bullets, or projectiles. I want to draw these bullets as a simple cube, sphere, or particle; there's no requirement on the appearance.
How can I write shaders and a draw call for this without having to create a repeated, duplicate set of geometry for the bullets?
I'd prefer an answer with vertex and fragment shader examples that demonstrate the expected data input and can be reverse-engineered to handle the CPU binding layer.
You create a regl command, which encapsulates a bunch of data. You can then call it with an object.
Each uniform can take an optional function to supply its value. That function is passed a regl context as the first argument and the object you passed as the second argument, so you can call the command multiple times with different objects to draw the same thing (same vertices, same shader) somewhere else.
var regl = createREGL();

const objects = [];
const numObjects = 100;
for (let i = 0; i < numObjects; ++i) {
  objects.push({
    x: rand(-1, 1),
    y: rand(-1, 1),
    speed: rand(.5, 1.5),
    direction: rand(0, Math.PI * 2),
    color: [rand(0, 1), rand(0, 1), rand(0, 1), 1],
  });
}

function rand(min, max) {
  return Math.random() * (max - min) + min;
}

const starPositions = [[0, 0, 0]];
const starElements = [];
const numPoints = 5;
for (let i = 0; i < numPoints; ++i) {
  for (let j = 0; j < 2; ++j) {
    const a = (i * 2 + j) / (numPoints * 2) * Math.PI * 2;
    const r = 0.5 + j * 0.5;
    starPositions.push([
      Math.sin(a) * r,
      Math.cos(a) * r,
      0,
    ]);
  }
  starElements.push([
    0, 1 + i * 2, 1 + i * 2 + 1,
  ]);
}

const drawStar = regl({
  frag: `
  precision mediump float;
  uniform vec4 color;
  void main () {
    gl_FragColor = color;
  }`,
  vert: `
  precision mediump float;
  attribute vec3 position;
  uniform mat4 mat;
  void main() {
    gl_Position = mat * vec4(position, 1);
  }`,
  attributes: {
    position: starPositions,
  },
  elements: starElements,
  uniforms: {
    mat: (ctx, props) => {
      const {viewportWidth, viewportHeight} = ctx;
      const {x, y} = props;
      const aspect = viewportWidth / viewportHeight;
      return [.1 / aspect, 0, 0, 0,
              0, .1, 0, 0,
              0, 0, 0, 0,
              x, y, 0, 1];
    },
    color: (ctx, props) => props.color,
  },
});

regl.frame(function () {
  regl.clear({
    color: [0, 0, 0, 1],
  });
  objects.forEach((o) => {
    o.direction += rand(-0.1, 0.1);
    o.x += Math.cos(o.direction) * o.speed * 0.01;
    o.y += Math.sin(o.direction) * o.speed * 0.01;
    o.x = (o.x + 3) % 2 - 1;
    o.y = (o.y + 3) % 2 - 1;
    drawStar(o);
  });
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/regl/1.3.11/regl.min.js"></script>
You can draw all of the bullets as point sprites, in which case you just need to provide the position and size of each bullet and draw them as GL_POINTS. Each "point" is rasterized to a square based on the output of your vertex shader (which runs once per point). Your fragment shader is called for each fragment in that square and can color the fragment however it wants: with a flat color, by sampling a texture, or however else you like.
Or you can provide a single model for all bullets, a separate transform for each bullet, and draw them as instanced GL_TRIANGLES or GL_TRIANGLE_STRIP or whatever. Read about instancing on the OpenGL wiki.
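For the instanced route, the CPU-side work is mostly packing one transform per bullet into a single buffer, which would then be bound as a vertex attribute with a divisor of 1 (glVertexAttribDivisor) so it advances once per instance. A minimal sketch of just the packing step, in Go since the neighboring questions use it (the layout here, one column-major 4x4 translation per bullet, is an assumption about how you'd lay out the attribute):

```go
package main

import "fmt"

// instanceData flattens one column-major 4x4 translation matrix per bullet
// into a single slice, ready to upload to a VBO whose matrix attribute uses
// a vertex attribute divisor of 1 (advance once per instance, not per vertex).
func instanceData(bullets [][3]float32) []float32 {
	data := make([]float32, 0, len(bullets)*16)
	for _, b := range bullets {
		data = append(data,
			1, 0, 0, 0,
			0, 1, 0, 0,
			0, 0, 1, 0,
			b[0], b[1], b[2], 1, // translation lives in the last column
		)
	}
	return data
}

func main() {
	d := instanceData([][3]float32{{1, 2, 3}, {4, 5, 6}})
	fmt.Println(len(d), d[12], d[13], d[14]) // 32 1 2 3
}
```

A single glDrawArraysInstanced (or regl's `instances:` option) then draws the shared bullet mesh once per matrix.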
Not a WebGL coder, so read with prejudice...
1. Encode the vertices in a texture.
Beware of clamping: use a texture format that does not clamp to <0.0, +1.0>, such as GL_LUMINANCE32F_ARB, or use vertices in that range only. To check for clamping, see: GLSL debug prints.
2. Render a single rectangle covering the whole screen, and use the texture from #1 as input.
This ensures that the fragment shader is called exactly once for each pixel of the screen/view.
3. Inside the fragment shader, read the texture and check the distance of the fragment to your vertices.
Based on that, render your stuff or discard() the fragment... Spheres are easy, but boxes and other shapes might be complicated to render based on distance to a vertex, especially if they can be arbitrarily oriented (which needs additional info in the input texture). To ease this up, you can prerender them into some texture and use the distance as a texture coordinate...
This answer of mine uses this technique: raytrace through 3D mesh.
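The clamping workaround in step 1 amounts to remapping coordinates from a known bounding range into [0, 1] before storing them, and inverting the mapping in the shader. A tiny sketch of that remapping (the bounds are assumptions; this is the math, not WebGL code):

```go
package main

import "fmt"

// encode maps a coordinate from [min,max] into [0,1] so it survives storage
// in a texture format that clamps to <0.0, +1.0>; decode inverts it.
func encode(v, min, max float32) float32 { return (v - min) / (max - min) }
func decode(u, min, max float32) float32 { return min + u*(max-min) }

func main() {
	u := encode(-3, -10, 10)            // roughly 0.35, safely inside [0,1]
	fmt.Println(u, decode(u, -10, 10))  // decodes back to roughly -3
}
```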
You can sometimes get away with using GL_POINTS with a large gl_PointSize and a customized fragment shader.
The example below uses the distance to the point center for the fragment alpha. (You could just as well sample a texture.)
Support for large point sizes might be limited, though, so check that before deciding on this route.
var canvas = document.getElementById('cvs');
gl = canvas.getContext('webgl');

var vertices = [
  -0.5,  0.75, 0.0,
   0.0,  0.5,  0.0,
  -0.75, 0.25, 0.0,
];

var vertex_buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertex_buffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
gl.bindBuffer(gl.ARRAY_BUFFER, null);

var vertCode =
  `attribute vec3 coord;
  void main(void) {
    gl_Position = vec4(coord, 1.0);
    gl_PointSize = 50.0;
  }`;
var vertShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertShader, vertCode);
gl.compileShader(vertShader);

var fragCode =
  `void main(void) {
    mediump float ds = distance(gl_PointCoord.xy, vec2(0.5, 0.5)) * 2.0;
    mediump vec4 fg_color = vec4(0.0, 0.0, 0.0, 1.0 - ds);
    gl_FragColor = fg_color;
  }`;
var fragShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragShader, fragCode);
gl.compileShader(fragShader);

var shaderProgram = gl.createProgram();
gl.attachShader(shaderProgram, vertShader);
gl.attachShader(shaderProgram, fragShader);
gl.linkProgram(shaderProgram);
gl.useProgram(shaderProgram);

gl.bindBuffer(gl.ARRAY_BUFFER, vertex_buffer);
var coord = gl.getAttribLocation(shaderProgram, "coord");
gl.vertexAttribPointer(coord, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(coord);

gl.viewport(0, 0, canvas.width, canvas.height);
gl.drawArrays(gl.POINTS, 0, 3);
<!doctype html>
<html>
  <body>
    <canvas width="400" height="400" id="cvs"></canvas>
  </body>
</html>
Can anybody help check why this OpenGL compute shader results in an APPCRASH on my Nvidia GT440, Windows 7 64-bit? I want to learn about the memory order defined in GLSL, so I wrote this simple shader. I don't think there is any problem with the shader code.
The sizes of image0, image1 and output_image are all 1024x768; output_image is only used to check whether the result is correct.
The work group size is 16x16, and the number of work groups launched in the X/Y/Z dimensions is 1024/16, 768/16 and 1, i.e. glDispatchCompute(64, 48, 1). If the x index of the invocation ID is even, the shader sets the pixel (x, y) of image0, then image1. If the x index of the invocation ID is odd, it reads image1 and waits until the corresponding pixel (x-1, y) of image1 is set, then reads image0.
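The dispatch dimensions above work out evenly because 1024 and 768 are exact multiples of the 16x16 local size; for arbitrary image sizes, the usual generalization is a ceiling division (a small side sketch, not part of the asker's code):

```go
package main

import "fmt"

// groups returns the number of work groups needed to cover `size` pixels
// with the given local (work group) size, rounding up so no pixels are missed.
func groups(size, local int) int {
	return (size + local - 1) / local
}

func main() {
	fmt.Println(groups(1024, 16), groups(768, 16)) // 64 48, matching glDispatchCompute(64, 48, 1)
	fmt.Println(groups(1000, 16))                  // 63, covering 1008 pixels (shader must bounds-check)
}
```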
#version 450 core

layout (local_size_x = 16, local_size_y = 16) in;

layout (binding = 0, rgba8ui) uniform uimage2D output_image;
layout (binding = 1, r32ui) uniform volatile coherent uimage2D image0;
layout (binding = 2, r32ui) uniform volatile coherent uimage2D image1;

void main(void)
{
    // thread 0 of each work group clears a portion of output image
    if (gl_LocalInvocationIndex == 0) {
        uvec2 start = gl_WorkGroupSize.xy * gl_WorkGroupID.xy;
        for (uint j = start.y; j < start.y + gl_WorkGroupSize.y; ++j)
            for (uint i = start.x; i < start.x + gl_WorkGroupSize.x; ++i)
                imageStore(output_image, ivec2(i, j), uvec4(0));
    }
    barrier();

    if (gl_GlobalInvocationID.x % 2 == 0) {
        // store to image0, then image1
        imageStore(image0, ivec2(gl_GlobalInvocationID.xy), uvec4(1));
        imageStore(image1, ivec2(gl_GlobalInvocationID.xy), uvec4(1));
    }
    else {
        ivec2 coord = ivec2(gl_GlobalInvocationID.xy) - ivec2(1, 0);
        // wait for image1 to be set
        uint flag;
        do {
            flag = imageLoad(image1, coord).x;
        } while (flag != 1);
        // check if image0 is set
        uint color = imageLoad(image0, coord).x * 255;
        // write output image
        imageStore(output_image, coord, uvec4(flag));
        imageStore(output_image, ivec2(gl_GlobalInvocationID.xy), uvec4(color));
    }
}
I am working on a simple OpenGL game to learn more about it. But for some reason, when I try to rotate my cube over time, it becomes stretched. You can see it in the photo:
I think it has to do with my model matrix, but I'm not sure. Here is a portion of my code:
model := mgl32.Ident4()
model = model.Mul4(mgl32.HomogRotate3D(float32(glfw.GetTime())*
	mgl32.DegToRad(50.0), mgl32.Vec3{0.5, 1.0, 0.0}))

view := mgl32.Ident4()
view = view.Mul4(mgl32.Translate3D(0.0, 0.0, -3.0))

projection := mgl32.Ident4()
projection = mgl32.Perspective(mgl32.DegToRad(45.0), 800/float32(600), 0.1, 100)

shader.setMat4("model", model)
shader.setMat4("view", view)
shader.setMat4("projection", projection)
Here is my Minimal, Complete, and Verifiable example. I've removed the texture, as it's not necessary to see the problem. I am also using wireframe so that you can see the stretching more clearly.
package main
import (
	"fmt"
	"log"
	"strings"

	"github.com/go-gl/gl/v4.5-core/gl"
	"github.com/go-gl/glfw/v3.2/glfw"
	"github.com/go-gl/mathgl/mgl32"
)

const (
	vertexShaderSource = `
	#version 450 core
	layout (location = 0) in vec3 aPos;
	uniform mat4 model;
	uniform mat4 view;
	uniform mat4 projection;
	void main()
	{
		gl_Position = projection * view * model * vec4(aPos, 1.0);
	}` + "\x00"

	fragmentShaderSource = `
	#version 450 core
	out vec4 FragColor;
	void main()
	{
		FragColor = vec4(1.0f, 0.5f, 0.2f, 1.0f);
	}` + "\x00"
)

var (
	vertices = []float32{
		-0.5, 0.5, -0.5,
		-0.5, -0.5, -0.5,
		0.5, -0.5, -0.5,
		0.5, 0.5, -0.5,
		-0.5, 0.5, 0.5,
		-0.5, -0.5, 0.5,
		0.5, -0.5, 0.5,
		0.5, 0.5, 0.5,
		0.5, 0.5, -0.5,
		0.5, -0.5, -0.5,
		-0.5, -0.5, 0.5,
		-0.5, 0.5, 0.5,
		-0.5, 0.5, -0.5,
		-0.5, -0.5, 0.5,
		0.5, -0.5, 0.5,
	}
	indices = []int{
		0, 1, 3, // -z
		3, 1, 2,
		5, 4, 7, // +z
		7, 6, 5,
		9, 8, 7, // +x
		7, 6, 9,
		0, 1, 11, // -x
		11, 1, 10,
		12, 4, 7, // +y
		7, 3, 12,
		13, 1, 14, // -y
		14, 1, 2,
	}
)
func main() {
	if err := glfw.Init(); err != nil {
		log.Fatalln("failed to initialize glfw:", err)
	}
	defer glfw.Terminate()

	glfw.WindowHint(glfw.ContextVersionMajor, 4)
	glfw.WindowHint(glfw.ContextVersionMinor, 5)
	glfw.WindowHint(glfw.OpenGLProfile, glfw.OpenGLCoreProfile)
	glfw.WindowHint(glfw.OpenGLForwardCompatible, glfw.True)

	window, err := glfw.CreateWindow(800, 600, "LearnOpenGL", nil, nil)
	if err != nil {
		panic(err)
	}
	defer window.Destroy()
	window.MakeContextCurrent()

	// Initialize Glow
	if err := gl.Init(); err != nil {
		panic(err)
	}
	gl.Viewport(0, 0, 800, 600)

	// SHADERS
	vertexID, err := loadShader(vertexShaderSource, gl.VERTEX_SHADER)
	if err != nil {
		panic(err)
	}
	fragmentID, err := loadShader(fragmentShaderSource, gl.FRAGMENT_SHADER)
	if err != nil {
		panic(err)
	}
	programID := gl.CreateProgram()
	gl.AttachShader(programID, vertexID)
	gl.AttachShader(programID, fragmentID)
	gl.DeleteShader(vertexID)
	gl.DeleteShader(fragmentID)
	gl.LinkProgram(programID)

	// VBO / VAO / EBO data
	var vbo, vao, ebo uint32
	gl.GenVertexArrays(1, &vao)
	gl.GenBuffers(1, &vbo)
	gl.GenBuffers(1, &ebo)
	gl.BindVertexArray(vao)
	gl.BindBuffer(gl.ARRAY_BUFFER, vbo)
	gl.BufferData(gl.ARRAY_BUFFER, len(vertices)*4, gl.Ptr(vertices), gl.STATIC_DRAW)
	gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, ebo)
	gl.BufferData(gl.ELEMENT_ARRAY_BUFFER, len(indices)*4, gl.Ptr(indices), gl.STATIC_DRAW)
	gl.VertexAttribPointer(0, 3, gl.FLOAT, false, 0, gl.PtrOffset(0))
	gl.EnableVertexAttribArray(0)
	gl.BindBuffer(gl.ARRAY_BUFFER, 0)
	gl.BindVertexArray(0)

	gl.Enable(gl.DEPTH_TEST)
	gl.PolygonMode(gl.FRONT_AND_BACK, gl.LINE)

	for !window.ShouldClose() {
		gl.ClearColor(0.2, 0.3, 0.3, 1.0)
		gl.Clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)

		gl.UseProgram(programID)
		model := mgl32.Ident4()
		view := mgl32.Ident4()
		projection := mgl32.Ident4()
		model = model.Mul4(mgl32.HomogRotate3D(float32(glfw.GetTime())*mgl32.DegToRad(50), mgl32.Vec3{0.5, 1, 0}))
		view = view.Mul4(mgl32.Translate3D(0, 0, -3))
		projection = projection.Mul4(mgl32.Perspective(mgl32.DegToRad(45), 800/float32(600), 0.1, 100))
		gl.UniformMatrix4fv(gl.GetUniformLocation(programID, gl.Str("model\x00")), 1, false, &model[0])
		gl.UniformMatrix4fv(gl.GetUniformLocation(programID, gl.Str("view\x00")), 1, false, &view[0])
		gl.UniformMatrix4fv(gl.GetUniformLocation(programID, gl.Str("projection\x00")), 1, false, &projection[0])

		gl.BindVertexArray(vao)
		gl.DrawElements(gl.TRIANGLES, 36, gl.UNSIGNED_INT, gl.PtrOffset(0))

		window.SwapBuffers()
		glfw.PollEvents()
	}
}
func loadShader(file string, shaderType uint32) (uint32, error) {
	shader := gl.CreateShader(shaderType)
	csources, free := gl.Strs(string(file))
	gl.ShaderSource(shader, 1, csources, nil)
	free()
	gl.CompileShader(shader)

	var status int32
	gl.GetShaderiv(shader, gl.COMPILE_STATUS, &status)
	if status == gl.FALSE {
		var logLength int32
		gl.GetShaderiv(shader, gl.INFO_LOG_LENGTH, &logLength)
		log := strings.Repeat("\x00", int(logLength+1))
		gl.GetShaderInfoLog(shader, logLength, nil, gl.Str(log))
		return 0, fmt.Errorf("failed to compile %v: %v", file, log)
	}
	return shader, nil
}
HomogRotate3D creates a 3D rotation Matrix that rotates by (radian) angle about some arbitrary axis given by a Normalized Vector.
Apparently, all you need is to normalize the axis-of-rotation vector, and that stops the distortion:

mgl32.Vec3{1, 1, 0}.Normalize()

So your model rotation line becomes:

model = model.Mul4(mgl32.HomogRotate3D(float32(glfw.GetTime())*
	mgl32.DegToRad(50.0), mgl32.Vec3{1.0, 1.0, 0.0}.Normalize()))

Have fun!
I need some help understanding why this piece of code produces a blank green window. I made it by combining examples from https://github.com/Jragonmiris/mathgl/blob/master/examples/opengl-tutorial/tutorial02/main.go and https://github.com/veandco/go-sdl2/blob/master/examples/opengl3.go. I'm not sure whether this is a bug in the Go SDL/GL bindings or an issue with my OpenGL understanding. All this should draw is a cube.
My code is:
package main
import (
	"fmt"
	// gl "github.com/chsc/gogl/gl33"
	"github.com/veandco/go-sdl2/sdl"
	// "math"
	"github.com/Jragonmiris/mathgl"
	"github.com/go-gl/gl"
	"runtime"
	"time"
)

// var program gl.Program = 0
// var buffer gl.Buffer = 0

func MakeProgram(vert, frag string) gl.Program {
	vertShader, fragShader := gl.CreateShader(gl.VERTEX_SHADER), gl.CreateShader(gl.FRAGMENT_SHADER)
	vertShader.Source(vert)
	fragShader.Source(frag)
	vertShader.Compile()
	fragShader.Compile()
	prog := gl.CreateProgram()
	prog.AttachShader(vertShader)
	prog.AttachShader(fragShader)
	prog.Link()
	prog.Validate()
	fmt.Println(prog.GetInfoLog())
	return prog
}
func main() {
	var window *sdl.Window
	var context sdl.GLContext
	var event sdl.Event
	var running bool
	var err error
	runtime.LockOSThread()

	if 0 != sdl.Init(sdl.INIT_EVERYTHING) {
		panic(sdl.GetError())
	}
	window, err = sdl.CreateWindow(winTitle, sdl.WINDOWPOS_UNDEFINED,
		sdl.WINDOWPOS_UNDEFINED,
		winWidth, winHeight, sdl.WINDOW_OPENGL)
	if err != nil {
		panic(err)
	}
	if window == nil {
		panic(sdl.GetError())
	}
	context = sdl.GL_CreateContext(window)
	if context == nil {
		panic(sdl.GetError())
	}
	if gl.Init() != 0 {
		panic("gl error")
	}

	gl.ClearColor(1.0, 1.0, 1.0, .5)
	gl.Viewport(0, 0, winWidth, winHeight)

	program := MakeProgram(vertexShaderSource, fragmentShaderSource)
	defer program.Delete()
	matrixID := program.GetUniformLocation("MVP")

	Projection := mathgl.Perspective(45.0, 4.0/3.0, 0.1, 100.0)
	View := mathgl.LookAt(4.0, 3.0, 3.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0)
	Model := mathgl.Ident4f()
	MVP := Projection.Mul4(View).Mul4(Model)

	gl.Enable(gl.DEPTH_TEST)
	gl.DepthFunc(gl.LESS)
	gl.Enable(gl.BLEND)
	gl.BlendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)

	vertexArray := gl.GenVertexArray()
	defer vertexArray.Delete()
	vertexArray.Bind()

	buffer := gl.GenBuffer()
	defer buffer.Delete()
	buffer.Bind(gl.ARRAY_BUFFER)
	gl.BufferData(gl.ARRAY_BUFFER, len(triangle_vertices)*4, &triangle_vertices, gl.STATIC_DRAW)

	running = true
	for running {
		for event = sdl.PollEvent(); event != nil; event = sdl.PollEvent() {
			switch t := event.(type) {
			case *sdl.QuitEvent:
				running = false
			case *sdl.MouseMotionEvent:
				fmt.Printf(string(t.Timestamp))
			}
		}
		gl.Clear(gl.COLOR_BUFFER_BIT) // | gl.DEPTH_BUFFER_BIT)
		program.Use()
		matrixID.UniformMatrix4fv(false, MVP)
		attribLoc := gl.AttribLocation(0)
		attribLoc.EnableArray()
		buffer.Bind(gl.ARRAY_BUFFER)
		attribLoc.AttribPointer(3, gl.FLOAT, false, 0, nil)
		gl.DrawArrays(gl.TRIANGLES, 0, 3)
		attribLoc.DisableArray()
		time.Sleep(50 * time.Millisecond)
		sdl.GL_SwapWindow(window)
	}
	sdl.GL_DeleteContext(context)
	window.Destroy()
	sdl.Quit()
}
const (
	winTitle  = "OpenGL Shader"
	winWidth  = 640
	winHeight = 480

	vertexShaderSource = `
	#version 330 core
	// Input vertex data, different for all executions of this shader.
	layout(location = 0) in vec3 vertexPosition_modelspace;
	// Values that stay constant for the whole mesh.
	uniform mat4 MVP;
	void main() {
		gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0);
	}
	`

	fragmentShaderSource = `
	#version 330 core
	// Output data
	out vec3 color;
	void main()
	{
		// Output color = red
		color = vec3(1, 0, 0);
	}
	`
)

var triangle_vertices = []float32{
	-.5, -.5, -.5,
	.5, -.5, -.5,
	0.0, 0.5, -.5,
}
So I'm still having trouble drawing a simple shape on the screen. I made a few changes, such as simplifying my shape to a triangle. I moved the coordinates toward the -z axis so I would be able to see them, but that hasn't worked. I then set the MVP matrix (moving the camera back some) just to make sure. My shaders are simple: I'm only passing in a vec3 vertex position and a mat4 MVP matrix, so I believe the shaders are working correctly. Sorry for all the confusion; I think I may be missing something here.
Update:
I also ran the version commands for OpenGL:
fmt.Println(gl.GetString(gl.VERSION))
fmt.Println(gl.GetString(gl.VENDOR))
fmt.Println(gl.GetString(gl.RENDERER))
for which the output was:
4.5.0 NVIDIA 347.09
NVIDIA Corporation
GeForce GTX 650 Ti/PCIe/SSE2
Not sure if this has any impact?
Update:
I have looked at some more examples and decided to try adding some SDL attributes, but still no luck:
sdl.GL_SetAttribute(sdl.GL_DOUBLEBUFFER, 1)
sdl.GL_SetAttribute(sdl.GL_RED_SIZE, 8)
sdl.GL_SetAttribute(sdl.GL_GREEN_SIZE, 8)
sdl.GL_SetAttribute(sdl.GL_BLUE_SIZE, 8)
sdl.GL_SetAttribute(sdl.GL_ALPHA_SIZE, 8)
Update:
I've edited this post to include only the more recent code, so as not to scare people away with the length.
I finally figured out what my problem was in this code.
The first thing I had to do was:
positionAttrib := program.GetAttribLocation("vertexPosition_modelspace")
for all the input variables going into the vertex shader. This is done after binding the VBO for each array.
Next, if you look at my code above:
gl.BufferData(gl.ARRAY_BUFFER, len(triangle_vertices)*4, &triangle_vertices, gl.STATIC_DRAW)
I simply passed the triangle_vertices slice itself, not its address:
gl.BufferData(gl.ARRAY_BUFFER, len(triangle_vertices)*4, triangle_vertices, gl.STATIC_DRAW)
Doing this seemed to fix it.
I would post this as a comment, but I do not yet have enough reputation.
The solution already provided nearly solved my similar issue, but not quite.
Where the provided solution was
gl.BufferData(gl.ARRAY_BUFFER, len(triangle_vertices)*4, triangle_vertices, gl.STATIC_DRAW)
The actual code which solved my issue is
gl.BufferData(gl.ARRAY_BUFFER, len(triangle_vertices)*4, gl.Ptr(triangle_vertices), gl.STATIC_DRAW)
I have also answered a similar question with this, but in more detail, which can be found here:
OpenGL Vertex Buffer doesn't draw anything in golang
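For what it's worth, the reason passing &triangle_vertices fails is that it is the address of the Go slice header (pointer, length, capacity), not of the underlying float data; gl.Ptr (or &triangle_vertices[0]) resolves to the data pointer the driver actually needs. A small plain-Go demonstration of the difference (uses unsafe only to show the two addresses are distinct; not go-gl code):

```go
package main

import (
	"fmt"
	"unsafe"
)

// addrsDiffer reports whether the address of the slice variable (the header)
// differs from the address of its first element (the actual float data).
// It always does: giving BufferData the header address uploads garbage.
func addrsDiffer(s []float32) bool {
	headerAddr := uintptr(unsafe.Pointer(&s))    // slice header on the stack
	dataAddr := uintptr(unsafe.Pointer(&s[0]))   // backing array of floats
	return headerAddr != dataAddr
}

func main() {
	triangleVertices := []float32{-.5, -.5, -.5, .5, -.5, -.5, 0.0, 0.5, -.5}
	fmt.Println(addrsDiffer(triangleVertices)) // true
}
```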