Issue with a shader and texture coordinates - c++

I have a shader I'm attempting to use, and I've come across an issue that I can't solve since my knowledge of GLSL is limited.
I'm using a texture as a mask, and to debug this issue I simply output that texture's pixel color as gl_FragColor. I'll post some images to show what it looks like and what it should look like.
Image link:
https://imgur.com/EBt2vbL
It seems related to gl_TexCoord[0].xy not providing the proper coordinates for the dissolve texture.
main.cpp
#include "Engine.h"
#include <stdio.h>
#include <iostream>
#include <windows.h>
int main(int argc, char *argv[])
{
try
{
Engine game;
game.Run();
}
catch (std::exception& err)
{
std::cout << "\nException: " << err.what() << std::endl;
}
return 0;
}
Engine.h
#pragma once
#include <SFML/System.hpp>
#include <SFML/Graphics.hpp>
#include <SFML/Window.hpp>
#include <SFML/Audio.hpp>
#include <SFML/Network.hpp>
#include <vector>
#include <iostream>
class Engine
{
public:
Engine();
void Run();
void HandleEvents(sf::Time deltaTime);
void Update(sf::Time deltaTime);
void BuildVertices();
void Draw();
private:
bool running;
bool hasFocus;
bool fullScreen;
sf::RenderWindow mainWindow;
sf::Time deltaTime;
sf::Event event;
sf::Vector2i screenResolution;
sf::Vector2i mousePosition;
sf::VertexArray vertices;
sf::Vertex vertex;
sf::Shader dissolveShader;
sf::Texture dissolveTexture;
sf::RenderStates renderState;
float dissolveValue;
sf::Texture objectSpriteSheetTexture;
};
Engine.cpp
#include "Engine.h"
static const sf::Time TimePerFrame = sf::seconds(1.f / 60.f);
Engine::Engine()
: hasFocus(true)
, fullScreen(false) // fullScreen(fullScreen) would self-initialize, leaving the member uninitialized
, running(false)
, dissolveValue(1.0f)
, vertices(sf::Quads)
{
mainWindow.create(sf::VideoMode(640, 480), "Test", sf::Style::Titlebar);
mainWindow.setPosition(sf::Vector2i(0, 0));
screenResolution.x = 640;
screenResolution.y = 480;
// 512x512 sheet, each sprite is 128x128
if (!objectSpriteSheetTexture.loadFromFile("ObjectSheet.png"))
std::cout << "failed to load ObjectSheet.png" << std::endl;
if (!dissolveTexture.loadFromFile("DissolveTexture.png"))
std::cout << "failed to load DissolveTexture.png" << std::endl;
if (!dissolveShader.loadFromFile("DissolveShader.frag", sf::Shader::Fragment))
{
std::cout << "failed to load DissolveShader.frag" << std::endl;
}
dissolveShader.setUniform("sourceTexture", sf::Shader::CurrentTexture);
dissolveShader.setUniform("dissolveTexture", dissolveTexture);
renderState.shader = &dissolveShader;
renderState.texture = &objectSpriteSheetTexture;
}
void Engine::Run()
{
// main loop
sf::Clock clock;
sf::Time timeSinceLastUpdate = sf::Time::Zero;
sf::Time elapsedTime;
running = true;
while(running)
{
elapsedTime = clock.restart();
timeSinceLastUpdate += elapsedTime;
HandleEvents(TimePerFrame);
while(timeSinceLastUpdate > TimePerFrame)
{
timeSinceLastUpdate -= TimePerFrame;
Update(TimePerFrame);
}
BuildVertices();
Draw();
}
}
void Engine::HandleEvents(sf::Time deltaTime)
{
mousePosition = sf::Mouse::getPosition(mainWindow);
while(mainWindow.pollEvent(event))
{
if(event.type == sf::Event::Closed)
mainWindow.close();
if (event.type == sf::Event::KeyPressed)
{
if (event.key.code == sf::Keyboard::Escape)
{
running = false;
}
}
}
}
void Engine::Update(sf::Time deltaTime)
{
}
void Engine::BuildVertices()
{
vertices.clear();
int frameSize = 128;
sf::Vector2i objectPosition(100, 100);
sf::Vector2i spriteSheetTextureCoordinates(0, 128);
vertex.position.x = objectPosition.x;
vertex.position.y = objectPosition.y;
vertex.texCoords.x = spriteSheetTextureCoordinates.x;
vertex.texCoords.y = spriteSheetTextureCoordinates.y;
vertices.append(vertex);
vertex.position.x = objectPosition.x + frameSize;
vertex.position.y = objectPosition.y;
vertex.texCoords.x = spriteSheetTextureCoordinates.x + frameSize;
vertex.texCoords.y = spriteSheetTextureCoordinates.y;
vertices.append(vertex);
vertex.position.x = objectPosition.x + frameSize;
vertex.position.y = objectPosition.y + frameSize;
vertex.texCoords.x = spriteSheetTextureCoordinates.x + frameSize;
vertex.texCoords.y = spriteSheetTextureCoordinates.y + frameSize;
vertices.append(vertex);
vertex.position.x = objectPosition.x;
vertex.position.y = objectPosition.y + frameSize;
vertex.texCoords.x = spriteSheetTextureCoordinates.x;
vertex.texCoords.y = spriteSheetTextureCoordinates.y + frameSize;
vertices.append(vertex);
}
void Engine::Draw()
{
mainWindow.clear(sf::Color::Black);
dissolveShader.setUniform("dissolveValue", dissolveValue);
mainWindow.draw(vertices, renderState);
mainWindow.display();
}
The vertex shader is a standard pass-through handled by SFML.
The fragment shader:
#version 130
// used as the mask to determine if a pixel of the source texture should be drawn, 128x128
uniform sampler2D dissolveTexture;
// the texture of the object i'm drawing, a 128x128 part of a 512x512 sprite sheet
uniform sampler2D sourceTexture;
// set to 1.0 for debug
uniform float dissolveValue;
void main( void )
{
vec4 sourceColor = texture2D(sourceTexture, gl_TexCoord[0].xy);
vec4 maskColor = texture2D(dissolveTexture, gl_TexCoord[0].xy);
if(maskColor.r <= dissolveValue)
{
// it would return the source pixel color here once the issue is solved
// gl_FragColor = sourceColor;
// debugging, so returning the mask texture's pixel color
gl_FragColor = maskColor;
}
else
{
gl_FragColor = sourceColor;
}
}
I'm probably overlooking something simple, so if someone can point me in the right direction I'd appreciate it. Thanks!

The texture coordinates for the GLSL function texture (formerly texture2D) range from 0.0 to 1.0, where (0.0, 0.0) is generally the bottom-left corner and (1.0, 1.0) the top-right corner of the texture image.
But SFML scales the texture coordinates by the size of the current texture (sf::Shader::CurrentTexture). This means the vertex texture coordinates have to be specified in the range of the current texture size, like this:
void Engine::BuildVertices()
{
vertices.clear();
int frameSize = 128;
sf::Vector2i objectPosition(100, 100);
sf::Vector2i texSize(512, 512);
vertex.position = sf::Vector2f(objectPosition.x, objectPosition.y);
vertex.texCoords = sf::Vector2f(0.0f, 0.0f);
vertices.append(vertex);
vertex.position = sf::Vector2f(objectPosition.x + frameSize, objectPosition.y);
vertex.texCoords = sf::Vector2f(texSize.x, 0.0f);
vertices.append(vertex);
vertex.position = sf::Vector2f(objectPosition.x + frameSize, objectPosition.y + frameSize);
vertex.texCoords = sf::Vector2f(texSize.x, texSize.y);
vertices.append(vertex);
vertex.position = sf::Vector2f(objectPosition.x, objectPosition.y + frameSize);
vertex.texCoords = sf::Vector2f(0.0f, texSize.y);
vertices.append(vertex);
}
You have a mask texture with a size of 128x128, and you have a tiled sprite sheet (4x4 tiles) with a size of 512x512. I recommend adding a texture coordinate offset uniform (texOffset) and a texture scale uniform (texScale) to the fragment shader, which allows you to select a tile of the texture:
#version 130
uniform sampler2D dissolveTexture;
uniform sampler2D sourceTexture;
uniform float dissolveValue;
uniform vec2 texScale;
uniform vec2 texOffset;
void main( void )
{
vec4 sourceColor = texture2D(sourceTexture, texOffset+texScale*gl_TexCoord[0].xy);
vec4 maskColor = texture2D(dissolveTexture, gl_TexCoord[0].xy);
gl_FragColor = mix( sourceColor, maskColor, step(maskColor.r, dissolveValue) );
}
You have to set the uniforms in the Draw function. The scale is the reciprocal of the number of tile rows and columns. The offset is the index of the tile multiplied by the scale factor:
void Engine::Draw()
{
mainWindow.clear(sf::Color::Black);
dissolveValue = 0.5f;
dissolveShader.setUniform("dissolveValue", dissolveValue);
float scale_x = 1.0f/4.0f;
float scale_y = 1.0f/4.0f;
int i_x = 1; // column of tile (from 0 to 3)
int i_y = 2; // row of tile (from 0 to 3)
dissolveShader.setUniform("texScale", sf::Glsl::Vec2(scale_x, scale_y));
dissolveShader.setUniform("texOffset", sf::Glsl::Vec2(i_x*scale_x, i_y*scale_y));
mainWindow.draw(vertices, renderState);
mainWindow.display();
}

Related

How to draw triangles in Qt6 using QOpenGLWidget?

Referencing a 7-year-old question: How do I render a triangle in QOpenGLWidget?
The accepted answer here gives a very detailed explanation of how to setup an example, but as numerous comments on that answer state (some years later), there are parts of the sample that are deprecated or are no longer best-practice.
Can anyone explain how to do this now, in Qt6+ without using glBegin/glEnd and without using GLU?
I ultimately need to be able to build a GUI around an OpenGL context, with the OpenGL being able to render 3D models as a wireframe, without any kind of shaders or textures mapped onto it.
I tried to work from the cube example. I was able to add GUI elements, but they render on top of the OpenGL window instead of above or around it and I am unsure of how to change the code to fix that. I was able to feed in a 3D geometry from file and get it to plot that, but it maps the cube.png texture from the example onto anything I plot and I haven't been able to get it to render a wireframe instead of a texture.
Edit 4: I guess I'll call this solved at this point. Referencing this thread, I learned you can add other widgets besides the central widget; they just can't be normal widgets, they have to be dock widgets for some reason (as far as I can tell). I have updated the code below to reflect this image, which is a 'working' solution to the questions I asked here. Huge thanks to user 'new QOpenGLWidget' for all of their help!
main.cpp
#include <QApplication>
#include <QLabel>
#include <QSurfaceFormat>
#ifndef QT_NO_OPENGL
#include "mainwidget.h"
#endif
#include "geometryengine.h"
#include "storedGeometry.h"
extern "C" {
// this fortran function is called by cpp
void rk_viz_f90(const char *geoname, int str_len=0); // length is optional, default 0, pass by value
// this cpp function is called by fortran
void send_facet(float in[][3])
{
gUseGeom.addFacet(GeometryEngine::facetData(QVector3D(in[0][0],in[0][1],in[0][2]),QVector3D(in[1][0],in[1][1],in[1][2]),QVector3D(in[2][0],in[2][1],in[2][2])));
}
}
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
QSurfaceFormat format;
format.setDepthBufferSize(24);
QSurfaceFormat::setDefaultFormat(format);
app.setApplicationName("cube");
app.setApplicationVersion("0.1");
// Call Fortran Rk_Viz Lib version
std::string geofile = "C:\\TEMP\\qt\\demo_send_arrays\\sphere_6in_PW.RawRkViz.bin";
printf("C++ filename %s\n",geofile.c_str());
const char * geoname = geofile.c_str();
rk_viz_f90(geoname,geofile.size());
#ifndef QT_NO_OPENGL
MainWindow window;
window.setFixedSize(600,800);
window.show();
#else
QLabel note("OpenGL Support required");
note.show();
#endif
return app.exec();
}
mainwindow.h - newly added
#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
#include <QPushButton>
#include "vizglwidget.h"
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
MainWindow(QWidget *parent = nullptr);
~MainWindow();
public slots:
private:
VizGlWidget *glWidget; // pointer to vizglwidget
QPushButton *loadButton;
void setupGui();
};
#endif // MAINWINDOW_H
mainwindow.cpp - newly added
#include "mainwindow.h"
#include <QGroupBox>
#include <QGridLayout>
#include <QDockWidget>
#include <QStatusBar>
MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent)
{
setWindowTitle("cube");
setupGui();
}
MainWindow::~MainWindow()
{
}
void MainWindow::setupGui()
{
// Try docking widgets with GL as central widget
glWidget = new VizGlWidget();
setCentralWidget(glWidget);
setStatusBar(new QStatusBar(this));
QDockWidget* dock1 = new QDockWidget;
this->addDockWidget(Qt::TopDockWidgetArea, dock1);
dock1->setMinimumSize(800,200);
QGridLayout *layout = new QGridLayout;
loadButton = new QPushButton(QString("Load Bin File..."),this);
layout->addWidget(loadButton,0,0,1,1,Qt::AlignHCenter);
dock1->setLayout(layout);
}
vizglwidget.h - formerly mainwidget.h
#ifndef VIZGLWIDGET_H
#define VIZGLWIDGET_H
#include "geometryengine.h"
#include "storedGeometry.h"
#include <QOpenGLWidget>
#include <QOpenGLFunctions>
#include <QMatrix4x4>
#include <QQuaternion>
#include <QVector2D>
#include <QBasicTimer>
#include <QOpenGLShaderProgram>
#include <QOpenGLTexture>
#include <QPushButton>
class GeometryEngine;
class VizGlWidget : public QOpenGLWidget, protected QOpenGLFunctions
{
Q_OBJECT
public:
using QOpenGLWidget::QOpenGLWidget;
~VizGlWidget();
protected:
void mousePressEvent(QMouseEvent *e) override;
void mouseReleaseEvent(QMouseEvent *e) override;
void timerEvent(QTimerEvent *e) override;
void initializeGL() override;
void resizeGL(int w, int h) override;
void paintGL() override;
void initShaders();
void initTextures();
private:
std::vector<GeometryEngine::facetData> *pUseGeom = nullptr;
QBasicTimer timer;
QOpenGLShaderProgram program;
GeometryEngine *geometries = nullptr;
QOpenGLTexture *texture = nullptr;
QMatrix4x4 projection;
QVector2D mousePressPosition;
QVector3D rotationAxis;
qreal angularSpeed = 0;
QQuaternion rotation;
};
#endif // VIZGLWIDGET_H
vizglwidget.cpp - formerly mainwidget.cpp
#include "vizglwidget.h"
#include <QMouseEvent>
#include <cmath>
VizGlWidget::~VizGlWidget()
{
// Make sure the context is current when deleting the texture
// and the buffers.
makeCurrent();
delete texture;
delete geometries;
doneCurrent();
}
void VizGlWidget::mousePressEvent(QMouseEvent *e)
{
// Save mouse press position
mousePressPosition = QVector2D(e->position());
}
void VizGlWidget::mouseReleaseEvent(QMouseEvent *e)
{
// Mouse release position - mouse press position
QVector2D diff = QVector2D(e->position()) - mousePressPosition;
// Rotation axis is perpendicular to the mouse position difference
// vector
QVector3D n = QVector3D(diff.y(), diff.x(), 0.0).normalized();
// Accelerate angular speed relative to the length of the mouse sweep
qreal acc = diff.length() / 100.0;
// Calculate new rotation axis as weighted sum
rotationAxis = (rotationAxis * angularSpeed + n * acc).normalized();
// Increase angular speed
angularSpeed += acc;
}
void VizGlWidget::timerEvent(QTimerEvent *)
{
// Decrease angular speed (friction)
angularSpeed *= 0.99;
// Stop rotation when speed goes below threshold
if (angularSpeed < 0.01) {
angularSpeed = 0.0;
} else {
// Update rotation
rotation = QQuaternion::fromAxisAndAngle(rotationAxis, angularSpeed) * rotation;
// Request an update
update();
}
}
void VizGlWidget::initializeGL()
{
initializeOpenGLFunctions();
glClearColor(0, 0, 0, 1);
initShaders();
initTextures();
// Enable depth buffer
glEnable(GL_DEPTH_TEST);
// Enable back face culling
//glEnable(GL_CULL_FACE);
geometries = new GeometryEngine();
// Use QBasicTimer because it's faster than QTimer
timer.start(12, this);
}
void VizGlWidget::initShaders()
{
// Compile vertex shader
if (!program.addShaderFromSourceFile(QOpenGLShader::Vertex, ":/vshader.glsl"))
close();
// Compile fragment shader
if (!program.addShaderFromSourceFile(QOpenGLShader::Fragment, ":/fshader.glsl"))
close();
// Link shader pipeline
if (!program.link())
close();
// Bind shader pipeline for use
if (!program.bind())
close();
}
void VizGlWidget::initTextures()
{
// Load cube.png image
texture = new QOpenGLTexture(QImage(":/cube.png").mirrored());
// Set nearest filtering mode for texture minification
texture->setMinificationFilter(QOpenGLTexture::Nearest);
// Set bilinear filtering mode for texture magnification
texture->setMagnificationFilter(QOpenGLTexture::Linear);
// Wrap texture coordinates by repeating
// e.g. texture coordinate (1.1, 1.2) is the same as (0.1, 0.2)
texture->setWrapMode(QOpenGLTexture::Repeat);
}
void VizGlWidget::resizeGL(int w, int h)
{
// Calculate aspect ratio
qreal aspect = qreal(w) / qreal(h ? h : 1);
// Set near plane to 3.0, far plane to 7.0, field of view 45 degrees
//const qreal zNear = 3.0, zFar = 7.0, fov = 45.0;
const qreal zNear = 0.1, zFar = 10.0, fov = 30.0;
// Reset projection
projection.setToIdentity();
// Set perspective projection
projection.perspective(fov, aspect, zNear, zFar);
}
void VizGlWidget::paintGL()
{
// Clear color and depth buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
texture->bind();
// Calculate model view transformation
QMatrix4x4 matrix;
matrix.translate(0.0, 0.0, -1);
matrix.rotate(rotation);
// Set modelview-projection matrix
program.setUniformValue("mvp_matrix", projection * matrix);
// Use texture unit 0 which contains cube.png
program.setUniformValue("texture", 0);
// Draw cube geometry
geometries->drawCubeGeometry(&program);
}
geometryengine.h
#ifndef GEOMETRYENGINE_H
#define GEOMETRYENGINE_H
#include <QOpenGLFunctions>
#include <QOpenGLShaderProgram>
#include <QOpenGLBuffer>
class GeometryEngine : protected QOpenGLFunctions
{
public:
struct facetData
{
QVector3D v1;
QVector3D v2;
QVector3D v3;
facetData() {
};
facetData(QVector3D iv1, QVector3D iv2, QVector3D iv3) {
v1 = iv1;
v2 = iv2;
v3 = iv3;
};
~facetData() = default; // members destroy themselves; invoking their destructors explicitly would run them twice
};
GeometryEngine();
virtual ~GeometryEngine();
void drawCubeGeometry(QOpenGLShaderProgram *program);
private:
void initCubeGeometry();
QOpenGLBuffer arrayBuf;
QOpenGLBuffer indexBuf;
};
#endif // GEOMETRYENGINE_H
geometryengine.cpp
#include "geometryengine.h"
#include "storedGeometry.h"
#include <QVector2D>
#include <QVector3D>
#include <algorithm>
GeometryEngine::GeometryEngine()
: indexBuf(QOpenGLBuffer::IndexBuffer)
{
initializeOpenGLFunctions();
// Generate 2 VBOs
arrayBuf.create();
indexBuf.create();
// Initializes cube geometry and transfers it to VBOs
initCubeGeometry();
}
GeometryEngine::~GeometryEngine()
{
arrayBuf.destroy();
indexBuf.destroy();
}
void GeometryEngine::initCubeGeometry()
{
// Get a copy of the geometry to reference here
std::vector<GeometryEngine::facetData> tGeom = gUseGeom.getGeom();
// Convert vector to array
GeometryEngine::facetData* aGeom = tGeom.data();
// Get a copy of the generated indices to reference here
std::vector<GLushort> tInd = gUseGeom.getGenIndices();
// Convert vector to array
GLushort* aInd = tInd.data();
// Transfer vertex data to VBO 0
arrayBuf.bind();
arrayBuf.allocate(aGeom, tGeom.size() * sizeof(GeometryEngine::facetData));
// Transfer index data to VBO 1
indexBuf.bind();
indexBuf.allocate(aInd, tInd.size() * sizeof(GLushort));
}
void GeometryEngine::drawCubeGeometry(QOpenGLShaderProgram *program)
{
// Tell OpenGL which VBOs to use
arrayBuf.bind();
indexBuf.bind();
// Tell OpenGL programmable pipeline how to locate vertex position data
int vertexLocation = program->attributeLocation("a_position");
program->enableAttributeArray(vertexLocation);
// setAttributeBuffer(int location, GLenum type, int offset, int tupleSize, int stride = 0)
program->setAttributeBuffer(vertexLocation, GL_FLOAT, 0, 3);
// Tell OpenGL programmable pipeline how to locate vertex texture coordinate data
int texcoordLocation = program->attributeLocation("a_texcoord");
program->enableAttributeArray(texcoordLocation);
// original: program->setAttributeBuffer(texcoordLocation, GL_FLOAT, offset, 2, sizeof(VertexData));
program->setAttributeBuffer(texcoordLocation, GL_FLOAT, 0, 3);
// Draw cube geometry using indices from VBO 1
glPolygonMode(GL_FRONT_AND_BACK,GL_LINE);
glDrawElements(GL_TRIANGLES, gUseGeom.gSize() * 3, GL_UNSIGNED_SHORT, nullptr);
}
storedgeometry.h
#ifndef STOREDGEOMETRY_H
#define STOREDGEOMETRY_H
#include "geometryengine.h"
class storedGeometry
{
private:
std::vector<GeometryEngine::facetData> useGeom;
std::vector<std::vector<GLushort>> useInd;
std::vector<GLushort> genInd;
public:
// Constructor/Destructor
storedGeometry();
~storedGeometry();
// Setters
void setGeom(std::vector<GeometryEngine::facetData> inGeom);
void addFacet(GeometryEngine::facetData inFacet);
void setIndices(std::vector<std::vector<GLushort>> inInd);
void addIndices(std::vector<GLushort> inInd);
// Getters
std::vector<GeometryEngine::facetData> getGeom();
GeometryEngine::facetData getFacet(int pos);
int gSize();
int iSize();
std::vector<std::vector<GLushort>> getUseIndices();
std::vector<GLushort> getGenIndices();
std::vector<GLushort> getInd(int pos);
};
extern storedGeometry gUseGeom;
#endif // STOREDGEOMETRY_H
storedgeometry.cpp
#include "storedGeometry.h"
// Constructor
storedGeometry::storedGeometry()
{
// members are default-constructed; nothing to initialize here
}
// Destructor
storedGeometry::~storedGeometry()
{
useGeom.clear();
useInd.clear();
genInd.clear();
}
// Setters
void storedGeometry::setGeom(std::vector<GeometryEngine::facetData> inGeom) {
useGeom = inGeom;
}
void storedGeometry::addFacet(GeometryEngine::facetData inFacet) {
useGeom.push_back(inFacet);
// also want to generate indices to go with this at the same time
// can take in indices from rkviz, but are not useful for this purpose
if (genInd.empty()) {
// case 1 - currently no indices, add 0, 1, 2
genInd.push_back(0);
genInd.push_back(1);
genInd.push_back(2);
} else {
// case 2 - already has indices, add n+1, n+2, n+3, where n is the previous entry
GLushort tInd = genInd[genInd.size()-1];
genInd.push_back(tInd+1);
genInd.push_back(tInd+2);
genInd.push_back(tInd+3);
}
}
void storedGeometry::setIndices(std::vector<std::vector<GLushort>> inInd) {
useInd = inInd;
}
void storedGeometry::addIndices(std::vector<GLushort> inInd) {
useInd.push_back(inInd);
}
// Getters
std::vector<GeometryEngine::facetData> storedGeometry::getGeom() {
return useGeom;
}
GeometryEngine::facetData storedGeometry::getFacet(int pos) {
if (pos >= 0 && pos < (int)useGeom.size()) {
return useGeom[pos];
} else {
// clamp out-of-range requests to the last facet
return useGeom.back();
}
}
int storedGeometry::gSize() {
return useGeom.size();
}
int storedGeometry::iSize() {
return useInd.size();
}
std::vector<std::vector<GLushort>> storedGeometry::getUseIndices() {
return useInd;
}
std::vector<GLushort> storedGeometry::getGenIndices() {
return genInd;
}
std::vector<GLushort> storedGeometry::getInd(int pos) {
if (pos >= 0 && pos < (int)useInd.size()) {
return useInd[pos];
} else {
// clamp out-of-range requests to the last entry
return useInd.back();
}
}
storedGeometry gUseGeom;
fshader.glsl
#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif
// From example:
//uniform sampler2D texture;
//varying vec2 v_texcoord;
void main()
{
// Set fragment color from texture
//original: gl_FragColor = texture2D(texture, v_texcoord);
// Set fragment color to fixed color
gl_FragColor = vec4(1.0f,0.0f,0.0f,1.0f);
}
vshader.glsl
#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif
uniform mat4 mvp_matrix;
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
void main()
{
// Calculate vertex position in screen space
gl_Position = mvp_matrix * a_position;
// Pass texture coordinate to fragment shader
// Value will be automatically interpolated to fragments inside polygon faces
v_texcoord = a_texcoord;
}
For the GUIs, don't put them inside the QOpenGLWidget. If you do, they will be rendered on top of the OpenGL content, because QOpenGLWidget fills its entire window. To fix this, add a wrapper class that extends QMainWindow and put both the MainWidget and the GUIs in it.
For the wireframe, try putting this code before calling glDrawElements:
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
More clarification on the wireframe:
Remove the texture and replace it with a uniform color like red:
gl_FragColor = vec4(1.0f, 0.0f, 0.0f, 1.0f);

Gross line drawing

I created some mountains and a grid floor in a synthwave style using OpenGL. The only post-process effect is a slight bloom, but something is wrong with the lines:
Lines quickly begin to be drawn very poorly the further away I look. Sometimes they are not drawn at all, or only in pieces. The lines are even drawn differently if I rotate the camera.
And it gets worse the closer I stick the camera to the floor:
I tried several things: disabling anti-aliasing on my NVIDIA graphics card, setting every texture filter to GL_LINEAR instead of GL_NEAREST, and using a 32-bit depth buffer instead of 24-bit. None of them worked.
What could be wrong?
Here's the code; I tried to remove as much code as I could.
The init function:
void initBase(int argc, char* argv[]) {
YLogConsole::createInstance();
glutInit(&argc, argv);
glutSetOption(
GLUT_ACTION_ON_WINDOW_CLOSE,
GLUT_ACTION_GLUTMAINLOOP_RETURNS
);
glutInitWindowSize(BaseWidth, BaseHeight);
glutInitWindowPosition(0, 0);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
YLog::log(YLog::ENGINE_INFO, (toString(argc) + " arguments en ligne de commande.").c_str());
FullScreen = false;
for (int i = 0; i<argc; i++)
{
if (argv[i][0] == 'f')
{
YLog::log(YLog::ENGINE_INFO, "Arg f mode fullscreen.\n");
FullScreen = true;
}
}
MainWindowId = glutCreateWindow("Yocto");
glutReshapeWindow(BaseWidth, BaseHeight);
setFullScreen(FullScreen);
if (MainWindowId < 1)
{
YLog::log(YLog::ENGINE_ERROR, "Erreur creation de la fenetre.");
exit(EXIT_FAILURE);
}
GLenum glewInitResult = glewInit();
if (glewInitResult != GLEW_OK)
{
YLog::log(YLog::ENGINE_ERROR, ("Erreur init glew " + std::string((char*)glewGetErrorString(glewInitResult))).c_str());
exit(EXIT_FAILURE);
}
//Display the system's capabilities
YLog::log(YLog::ENGINE_INFO, ("OpenGL Version : " + std::string((char*)glGetString(GL_VERSION))).c_str());
glutDisplayFunc(updateBase);
glutReshapeFunc(resizeBase);
glutKeyboardFunc(keyboardDown);
glutKeyboardUpFunc(keyboardUp);
glutSpecialFunc(specialDown);
glutSpecialUpFunc(specialUp);
glutMouseFunc(mouseClick);
glutMotionFunc(mouseMoveActive);
glutPassiveMotionFunc(mouseMovePassive);
glutIgnoreKeyRepeat(1);
//Initialise the YRenderer
Renderer = YRenderer::getInstance();
Renderer->setRenderObjectFun(renderObjectsBase);
Renderer->setRender2DFun(render2dBase);
Renderer->setBackgroundColor(YColor());
Renderer->initialise(&TimerGPURender);
//Apply the YRenderer config
glViewport(0, 0, Renderer->ScreenWidth, Renderer->ScreenHeight);
Renderer->resize(Renderer->ScreenWidth, Renderer->ScreenHeight);
//Game screens
ScreenManager = new GUIScreenManager();
uint16 x = 10;
uint16 y = 10;
ScreenJeu = new GUIScreen();
ScreenStats = new GUIScreen();
//Button to display the params
GUIBouton * btn = new GUIBouton();
btn->Titre = std::string("Params");
btn->X = x;
btn->Y = y;
btn->setOnClick(clickBtnParams);
ScreenJeu->addElement(btn);
y += btn->Height + 5;
btn = new GUIBouton();
btn->Titre = std::string("Stats");
btn->X = x;
btn->Y = y;
btn->setOnClick(clickBtnStats);
ScreenJeu->addElement(btn);
y += btn->Height + 1;
//Stats screen
y = btn->Height + 15;
LblFps = new GUILabel();
LblFps->Text = "FPS";
LblFps->X = x;
LblFps->Y = y;
LblFps->Visible = true;
ScreenStats->addElement(LblFps);
//Settings screen
x = 10;
y = 10;
ScreenParams = new GUIScreen();
GUIBouton * btnClose = new GUIBouton();
btnClose->Titre = std::string("Close");
btnClose->X = x;
btnClose->Y = y;
btnClose->setOnClick(clickBtnClose);
ScreenParams->addElement(btnClose);
ScreenStats->addElement(btnClose);
//Screen to render
ScreenManager->setActiveScreen(ScreenJeu);
//Init YCamera
Renderer->Camera->setPosition(YVec3f(320, 320, 320));
Renderer->Camera->setLookAt(YVec3f(0, 0, 0));
Renderer->Camera->setProjectionPerspective(Instance->Fov,
(float)Instance->Renderer->ScreenWidth / (float)Instance->Renderer->ScreenHeight,
Instance->NearPlane, Instance->FarPlane);
//Init YTimer
Timer = new YTimer();
//Load the shaders
Instance->loadShaders();
//Init for the derived class
init();
//Start the clock
Timer->start();
YLog::log(YLog::ENGINE_INFO, "[ Yocto initialized ]\nPress : \n - f to toggle fullscreen\n - F1 for png screen shot\n - F5 to hot-reload shaders");
}
The main loop:
void SynthEngine::renderObjects()
{
Renderer->updateMatricesFromOgl();
glUseProgram(shaderWorld);
Renderer->sendMatricesToShader(shaderWorld);
dec->getGround()->render();
}
updateMatricesFromOgl:
void updateMatricesFromOgl() {
float matMvTab[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matMvTab);
memcpy(MatMV.Mat.t, matMvTab, 16 * sizeof(float));
MatMV.transpose();
float matProjTab[16];
glGetFloatv(GL_PROJECTION_MATRIX, matProjTab);
memcpy(MatP.Mat.t, matProjTab, 16 * sizeof(float));
MatP.transpose();
MatMVP = MatP;
MatMVP *= MatMV;
MatV.createViewMatrix(Camera->Position, Camera->LookAt, Camera->UpVec);
MatIV = MatV;
MatIV.invert();
MatM = MatIV;
MatM *= MatMV;
MatIM = MatM;
MatIM.invert();
MatNorm = MatM;
MatNorm.invert();
MatNorm.transpose();
MatIP = MatP;
MatIP.invert();
}
The render function (VBO); textureIndex and textureCubeIndex are always 0:
void YVbo::render(GBuffer * inBuffer) {
//Global stats
YRenderer::NbVBOFacesRendered += NbVertices / 3;
if (textureIndex)
{
glBindTexture(GL_TEXTURE_2D, textureIndex);
}
if (textureCubeIndex)
{
glBindTexture(GL_TEXTURE_CUBE_MAP, textureCubeIndex);
}
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
for (int i = 0; i<NbElements; i++)
glEnableVertexAttribArray(i);
if (StorageMethod == PACK_BY_ELEMENT_TYPE) {
for (int i = 0; i<NbElements; i++)
glVertexAttribPointer(i, Elements[i].NbFloats, GL_FLOAT, GL_FALSE, 0, (void*)(Elements[i].OffsetFloats * sizeof(float)));
} else {
for (int i = 0; i<NbElements; i++)
glVertexAttribPointer(i, Elements[i].NbFloats, GL_FLOAT, GL_FALSE, TotalNbFloatForOneVertice * sizeof(float), (void*)(Elements[i].OffsetFloats * sizeof(float)));
}
YEngine::Instance->TimerGPURender.startAccumPeriod();
glDrawArrays(GL_TRIANGLES, 0, NbVertices);
YEngine::Instance->TimerGPURender.endAccumPeriod();
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glBindTexture(GL_TEXTURE_2D, 0);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
}
vertex shader of ShaderWorld :
#version 400
uniform mat4 mvp;
uniform float elapsed;
layout(location = 0) in vec3 position_in;
layout(location = 1) in vec4 color_border_in;
layout(location = 2) in vec4 color_fill_in;
out VertexAttrib
{
vec4 colorFill;
vec4 colorBorder;
} vertex;
void main()
{
gl_Position = mvp * vec4(position_in, 1);
vertex.colorBorder = color_border_in;
vertex.colorFill = color_fill_in;
}
geometry shader
#version 400
out vec4 color_border;
out vec4 color_fill;
out vec3 bary;
in VertexAttrib
{
vec4 colorFill;
vec4 colorBorder;
} vertex[];
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
void main()
{
for (int i = 0; i < 3; i++)
{
color_border = vertex[i].colorBorder;
color_fill = vertex[i].colorFill;
gl_Position = gl_in[i].gl_Position;
if (i == 0)
bary = vec3(0, 0, 1);
if (i == 1)
bary = vec3(0, 1, 0);
if (i == 2)
bary = vec3(1, 0, 0);
EmitVertex();
}
EndPrimitive();
}
fragment shader :
#version 400
in vec4 color_border;
in vec4 color_fill;
in vec3 bary;
layout (location = 0) out vec4 color;
layout (location = 1) out vec4 passColor;
float toleranceLight = 0.7;
void main()
{
vec4 interColor;
if ((bary.x) < 0.01 || (bary.y) < 0.01 || ((bary.z) < 0.01 && color_border.r == 0))
{
interColor = color_border;
}
else
{
interColor = color_fill;
}
if (max(interColor.r,max(interColor.g, interColor.b)) > toleranceLight)
{
passColor = interColor;
}
else
{
passColor = vec4(0,0,0,1);
}
color = interColor;
}
Your main issue here is the perspective interpolation of vec3 bary combined with its boolean-like use, which causes artifacts around the edges between color_border and color_fill. Consider some form of smooth interpolation between the edge and fill colors based on the bary interpolant.
Alternatively, you can consider mapping a mask texture indicating edges vs. fill. You'll need to ensure that you generate mipmaps and sample it at runtime with an anisotropic filter.
On a separate note, you don't need a geometry shader at all in this case. Just use gl_VertexID % 3 directly in the vertex shader and output bary from there.

How to change vertices in QOpenGLWidget dynamically?

I use QOpenGLWidget to derive my own OpenGL-rendered widget. I want to set the vertex data later, or change it whenever needed, instead of in initializeGL(). Drawing a triangle by supplying the vertices in initializeGL() works fine, but I want to call updateModel() externally to change the model whenever I want.
In initializeGL(), the commented lines are the standard use of QOpenGLWidget, and they work well. But when I call updateModel() to change the vertices, nothing is drawn.
What's wrong? Thanks.
#include "openglwindow.h"
#include <QDebug>
#include <QOpenGLShader>
#include <QOpenGLShaderProgram>
#include <QMouseEvent>
#include <QWheelEvent>
#include <QVector3D>
#include <QTimer>
OpenGLWidget::OpenGLWidget(QWidget *parent):
QOpenGLWidget(parent),
m_vshader(0),
m_fshader(0),
m_program(0),
m_mousePressed(false),
m_eye(QVector3D(0,0,10)),
m_center(QVector3D(0,0,0)),
m_up(QVector3D(0,1,0)),
m_verticalAngle(45.f){
m_modelUpdated = false;
setMinimumSize(300,300);
//matrix initialization
m_model.setToIdentity();
m_view.lookAt(m_eye,m_center,m_up);
m_projection.perspective(m_verticalAngle,aspectRatio(),0.01f,100.0f);
m_timer = new QTimer(this);
m_timer->setInterval(10);
connect(m_timer,&QTimer::timeout,this,QOverload<>::of(&QWidget::update));
}
OpenGLWidget::~OpenGLWidget()
{
// makeCurrent();
// delete m_program;
// delete m_vshader;
// delete m_fshader;
// m_vbo.destroy();
// m_vao.destroy();
// doneCurrent();
}
void OpenGLWidget::initializeGL()
{
qDebug()<<"initializeGL()";
initializeOpenGLFunctions();
glEnable(GL_DEPTH_TEST);
glClearColor(.2f,.3f,.3f,1.0f);
//Initialized program shader
m_vshader = new QOpenGLShader(QOpenGLShader::Vertex);
// note: each GLSL line needs an explicit \n; a trailing backslash only
// splices the literal, putting the whole shader onto the #version line
const char * vcode =
"#version 330 core\n"
"layout (location = 0) in vec3 aPos;\n"
"layout (location = 1) in vec3 aNor;\n"
"uniform mat4 model;\n"
"uniform mat4 view;\n"
"uniform mat4 projection;\n"
"out vec3 Normal;\n"
"out vec3 FragPos;\n"
"void main()\n"
"{\n"
"gl_Position = projection*view*model*vec4(aPos,1.0);\n"
"Normal = aNor;\n"
"FragPos = vec3(model*vec4(aPos,1.0));\n"
"}";
m_vshader->compileSourceCode(vcode);
m_fshader = new QOpenGLShader(QOpenGLShader::Fragment);
const char * fcode =
"#version 330 core\n"
"out vec4 FragColor;\n"
"in vec3 Normal;\n"
"in vec3 FragPos;\n"
"void main()\n"
"{\n"
"FragColor = vec4(1.0f, 0.5f, 0.2f, 1.0f);\n"
"}";
m_fshader->compileSourceCode(fcode);
m_program = new QOpenGLShaderProgram();
m_program->addShader(m_vshader);
m_program->addShader(m_fshader);
m_program->link();
m_program->bind();
m_attriPos = m_program->attributeLocation("aPos");
// m_modelAttriLocation = m_program->attributeLocation("model");
// m_viewAttriLocation = m_program->attributeLocation("view");
// m_projectAttriLocation = m_program->attributeLocation("projection");
// m_vertices<<QVector3D(-0.5f,-0.5f,0.0f)
// <<QVector3D(0.5f,-0.5f,0.0f)
// <<QVector3D(0.0f,0.5f,0.0f)
// <<QVector3D(-0.5f,0.5f,0.0f)
// <<QVector3D(0.5f,0.5f,0.0f)
// <<QVector3D(0.0f,-0.5f,0.0f);
//create VAO
m_vao.create();
m_vao.bind();
//create VBO
m_vbo.create();
qDebug()<<m_vbo.bind();
m_vbo.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo.allocate(m_vertices.constData(),m_vertices.count()*3*sizeof(float));
m_program->enableAttributeArray(0);
m_program->setAttributeBuffer(0,GL_FLOAT,0,3);
//m_vbo.release();
//m_vao.release();
m_program->release();
}
void OpenGLWidget::resizeGL(int w, int h){
//Updating m_projection matrix here
m_projection.setToIdentity();
m_projection.perspective(45.0f,w/float(h),0.01f,100.0f);
}
void OpenGLWidget::paintGL(){
//glClearColor(.2f,.3f,.3f,1.0f);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
// m_program->bind();
// {
// m_program->setUniformValue("projection",m_projection);
// m_program->setUniformValue("view",m_view);
// m_program->setUniformValue("model",m_model);
// m_vao.bind();
// glDrawArrays(GL_TRIANGLES,0,3);
// m_vao.release();
// }
// m_program->release();
paintModel();
}
void OpenGLWidget::mousePressEvent(QMouseEvent *event)
{
m_mousePressed = true;
m_prevXPos = event->x();
m_prevYPos = event->y();
}
void OpenGLWidget::mouseMoveEvent(QMouseEvent *event)
{
if(m_mousePressed == true){
//update matrix here
int deltaX = event->x()-m_prevXPos;
int deltaY = event->y()-m_prevYPos;
updateCameraVectors(deltaX,deltaY);
}
}
void OpenGLWidget::mouseReleaseEvent(QMouseEvent *event)
{
m_mousePressed = false;
}
void OpenGLWidget::wheelEvent(QWheelEvent *event)
{
updateCameraVectors(0,0,event->angleDelta().y());
}
float OpenGLWidget::aspectRatio()
{
return width()/static_cast<float>(height());
}
void OpenGLWidget::paintModel()
{
m_program->bind();
{
m_vao.bind();
m_program->setUniformValue("projection",m_projection);
m_program->setUniformValue("view",m_view);
m_program->setUniformValue("model",m_model);
glDrawArrays(GL_TRIANGLES,0,m_vertices.count());
m_vao.release();
}
m_program->release();
}
void OpenGLWidget::updateModel(const QVector<QVector3D> &model)
{
m_vertices = model;
if(isValid() == false)
return;
makeCurrent();
m_vbo.destroy();
m_vao.destroy();
if(m_vbo.isCreated() == false){
m_vao.create();
m_vao.bind();
m_vbo.create();
m_vbo.bind();
m_vbo.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo.allocate(m_vertices.constData(),m_vertices.count()*3*sizeof(float));
for(GLenum err;(err = glGetError())!=GL_NO_ERROR;){
qDebug()<<"error:"<<err;
}
m_vbo.release();
m_vao.release();
}
doneCurrent();
}
When the VBO/VAO get deleted and recreated, you also have to redo the VAO setup, namely
m_program->enableAttributeArray(0);
m_program->setAttributeBuffer(0,GL_FLOAT,0,3);
You might be confused because these functions are called on the shader program object, but they actually record state in the currently bound VAO.
Note that deleting and re-creating the VAO/VBO in updateModel is not really necessary. If you want to change the vertices, it is sufficient to call m_vbo.allocate with the new data.
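Putting that together, updateModel could be reduced to a re-upload, with the attribute setup repeated only when the VAO is actually recreated (a sketch against the question's member names, not a tested drop-in):

```cpp
void OpenGLWidget::updateModel(const QVector<QVector3D> &model)
{
    m_vertices = model;
    if (!isValid())
        return;
    makeCurrent();
    m_vao.bind();
    m_vbo.bind();
    // Re-upload the vertex data; no need to destroy and recreate the buffers.
    m_vbo.allocate(m_vertices.constData(), m_vertices.count() * 3 * sizeof(float));
    // Only needed again if the VAO was destroyed and recreated:
    m_program->bind();
    m_program->enableAttributeArray(0);
    m_program->setAttributeBuffer(0, GL_FLOAT, 0, 3);
    m_program->release();
    m_vbo.release();
    m_vao.release();
    doneCurrent();
}
```

The key point is that enableAttributeArray/setAttributeBuffer must be issued while the VAO is bound, because that is where the attribute bindings live.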

How to use shaders more efficiently in c++/sfml?

I'm working on a sort of ASCII canvas for a game. I assumed it would be more efficient to use a sprite sheet of ASCII glyphs in CP437 style to draw the ASCII art. I needed a way to color the background and foreground of the glyphs, so I'm using a fragment shader. Using the shader drops me to 7 fps; without the shader, I get about 131.
Am I doing something incorrectly? Is it just too expensive to loop through a vector of strings (and each character in the string), calculate the position of the glyph on the sheet, set the texture position, and then draw the sprite with a shader for each character?
int main() {
sf::RenderWindow rt(sf::VideoMode(1280, 720), "Demo Game");
sf::Texture texture;
texture.loadFromFile("Resources/courier_8x16.png");
texture.setSmooth(false);
sf::Sprite sprite(texture);
sf::Shader shader;
shader.loadFromFile("cycle.frag", sf::Shader::Fragment);
shader.setUniform("texture", sf::Shader::CurrentTexture);
sf::Clock clock;
sf::Time timeSinceLastUpdate = sf::Time::Zero;
std::vector<std::string> chars = std::vector<std::string>(50, std::string(100, 'X'));
while (rt.isOpen())
{
processEvents();
timeSinceLastUpdate += clock.restart();
while (timeSinceLastUpdate > TimePerFrame)
{
timeSinceLastUpdate -= TimePerFrame;
processEvents();
update(TimePerFrame);
}
//render();
rt.clear();
for (int y = 0; y < chars.size(); y++)
{
for (int x = 0; x < chars[y].size(); x++)
{
//bg and fg colors will be in a 2D vector and used here
shader.setUniform("foreground", sf::Glsl::Vec4(sf::Color::White));
shader.setUniform("background", sf::Glsl::Vec4(sf::Color::Black));
//uses decimal value of char and dimensions
//to find location of appropriate glyph in sprite sheet
sprite.setTextureRect(sf::IntRect((chars[y][x] % 16) * 8, (chars[y][x] / 16) * 16, 8, 16));
sprite.setPosition(x * 8, y * 16);
rt.draw(sprite, &shader);
}
}
rt.display();
}
}
Here's the shader code:
//////cycle.frag
uniform vec4 foreground;
uniform vec4 background;
uniform sampler2D texture;
void main()
{
vec4 pixel = texture2D(texture, gl_TexCoord[0].xy);
if (pixel.r < .1)
pixel = background;
else
pixel = foreground;
gl_FragColor = pixel;
}
Here's the sprite sheet: (all the paragraph symbols are just placeholders)

Billboarding C++

I got code from my teacher that currently shows a 3D globe and a 2D particle system. The camera moves around in circles. The particle system is supposed to face the camera.
According to my lecture notes, I have to multiply the billboard with the inverse of the camera's view matrix. I would love to try that but I have trouble using the variable for the view matrix.
#include "pch.h"
#include <Kore/Application.h>
#include <Kore/IO/FileReader.h>
#include <Kore/Math/Core.h>
#include <Kore/Math/Random.h>
#include <Kore/System.h>
#include <Kore/Input/Keyboard.h>
#include <Kore/Input/Mouse.h>
#include <Kore/Audio/Mixer.h>
#include <Kore/Graphics/Image.h>
#include <Kore/Graphics/Graphics.h>
#include <Kore/Log.h>
#include "ObjLoader.h"
#include "Collision.h"
#include "PhysicsWorld.h"
#include "PhysicsObject.h"
using namespace Kore;
// A simple particle implementation
class Particle {
public:
VertexBuffer* vb;
IndexBuffer* ib;
mat4 M;
// The current position
vec3 position;
// The current velocity
vec3 velocity;
// The remaining time to live
float timeToLive;
// The total time to live
float totalTimeToLive;
// Is the particle dead (= ready to be re-spawned?)
bool dead;
void init(const VertexStructure& structure) {
vb = new VertexBuffer(4, structure,0);
float* vertices = vb->lock();
SetVertex(vertices, 0, -1, -1, 0, 0, 0);
SetVertex(vertices, 1, -1, 1, 0, 0, 1);
SetVertex(vertices, 2, 1, 1, 0, 1, 1);
SetVertex(vertices, 3, 1, -1, 0, 1, 0);
vb->unlock();
// Set index buffer
ib = new IndexBuffer(6);
int* indices = ib->lock();
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 0;
indices[4] = 2;
indices[5] = 3;
ib->unlock();
dead = true;
}
void Emit(vec3 pos, vec3 velocity, float timeToLive) {
position = pos;
this->velocity = velocity;
dead = false;
this->timeToLive = timeToLive;
totalTimeToLive = timeToLive;
}
Particle() {
}
void SetVertex(float* vertices, int index, float x, float y, float z, float u, float v) {
vertices[index* 8 + 0] = x;
vertices[index*8 + 1] = y;
vertices[index*8 + 2] = z;
vertices[index*8 + 3] = u;
vertices[index*8 + 4] = v;
vertices[index*8 + 5] = 0.0f;
vertices[index*8 + 6] = 0.0f;
vertices[index*8 + 7] = -1.0f;
}
void render(TextureUnit tex, Texture* image) {
Graphics::setTexture(tex, image);
Graphics::setVertexBuffer(*vb);
Graphics::setIndexBuffer(*ib);
Graphics::drawIndexedVertices();
}
void Integrate(float deltaTime) {
timeToLive -= deltaTime;
if (timeToLive < 0.0f) {
dead = true;
}
// Note: We are using no forces or gravity at the moment.
position += velocity * deltaTime;
// Build the matrix
M = mat4::Translation(position.x(), position.y(), position.z()) * mat4::Scale(0.2f, 0.2f, 0.2f);
}
};
class ParticleSystem {
public:
// The center of the particle system
vec3 position;
// The minimum coordinates of the emitter box
vec3 emitMin;
// The maximal coordinates of the emitter box
vec3 emitMax;
// The list of particles
Particle* particles;
// The number of particles
int numParticles;
// The spawn rate
float spawnRate;
// When should the next particle be spawned?
float nextSpawn;
ParticleSystem(int maxParticles, const VertexStructure& structure ) {
particles = new Particle[maxParticles];
numParticles = maxParticles;
for (int i = 0; i < maxParticles; i++) {
particles[i].init(structure);
}
spawnRate = 0.05f;
nextSpawn = spawnRate;
position = vec3(0.5f, 1.3f, 0.5f);
float b = 0.1f;
emitMin = position + vec3(-b, -b, -b);
emitMax = position + vec3(b, b, b);
}
void update(float deltaTime) {
// Do we need to spawn a particle?
nextSpawn -= deltaTime;
bool spawnParticle = false;
if (nextSpawn < 0) {
spawnParticle = true;
nextSpawn = spawnRate;
}
for (int i = 0; i < numParticles; i++) {
if (particles[i].dead) {
if (spawnParticle) {
EmitParticle(i);
spawnParticle = false;
}
}
particles[i].Integrate(deltaTime);
}
}
void render(TextureUnit tex, Texture* image, ConstantLocation mLocation, mat4 V) {
Graphics::setBlendingMode(BlendingOperation::SourceAlpha, BlendingOperation::InverseSourceAlpha);
Graphics::setRenderState(RenderState::DepthWrite, false);
/************************************************************************/
/* Exercise 7 1.1 */
/************************************************************************/
/* Change the matrix V in such a way that the billboards are oriented towards the camera */
/************************************************************************/
/* Exercise 7 1.2 */
/************************************************************************/
/* Animate using at least one new control parameter */
for (int i = 0; i < numParticles; i++) {
// Skip dead particles
if (particles[i].dead) continue;
Graphics::setMatrix(mLocation, particles[i].M * V);
particles[i].render(tex, image);
}
Graphics::setRenderState(RenderState::DepthWrite, true);
}
float getRandom(float minValue, float maxValue) {
int randMax = 1000000;
int randInt = Random::get(0, randMax);
float r = (float) randInt / (float) randMax;
return minValue + r * (maxValue - minValue);
}
void EmitParticle(int index) {
// Calculate a random position inside the box
float x = getRandom(emitMin.x(), emitMax.x());
float y = getRandom(emitMin.y(), emitMax.y());
float z = getRandom(emitMin.z(), emitMax.z());
vec3 pos;
pos.set(x, y, z);
vec3 velocity(0, 0.3f, 0);
particles[index].Emit(pos, velocity, 3.0f);
}
};
namespace {
const int width = 1024;
const int height = 768;
double startTime;
Shader* vertexShader;
Shader* fragmentShader;
Program* program;
float angle = 0.0f;
// null terminated array of MeshObject pointers
MeshObject* objects[] = { nullptr, nullptr, nullptr, nullptr, nullptr, nullptr };
// null terminated array of PhysicsObject pointers
PhysicsObject* physicsObjects[] = { nullptr, nullptr, nullptr, nullptr, nullptr, nullptr };
// The view projection matrix aka the camera
mat4 P;
mat4 View;
mat4 PV;
vec3 cameraPosition;
MeshObject* sphere;
PhysicsObject* po;
PhysicsWorld physics;
// uniform locations - add more as you see fit
TextureUnit tex;
ConstantLocation pvLocation;
ConstantLocation mLocation;
ConstantLocation tintLocation;
Texture* particleImage;
ParticleSystem* particleSystem;
double lastTime;
void update() {
double t = System::time() - startTime;
double deltaT = t - lastTime;
//Kore::log(Info, "%f\n", deltaT);
lastTime = t;
Kore::Audio::update();
Graphics::begin();
Graphics::clear(Graphics::ClearColorFlag | Graphics::ClearDepthFlag, 0xff9999FF, 1000.0f);
Graphics::setFloat4(tintLocation, vec4(1, 1, 1, 1));
program->set();
angle += 0.3f * deltaT;
float x = 0 + 3 * Kore::cos(angle);
float z = 0 + 3 * Kore::sin(angle);
cameraPosition.set(x, 2, z);
//PV = mat4::Perspective(60, (float)width / (float)height, 0.1f, 100) * mat4::lookAt(vec3(0, 2, -3), vec3(0, 2, 0), vec3(0, 1, 0));
P = mat4::Perspective(60, (float)width / (float)height, 0.1f, 100);
View = mat4::lookAt(vec3(x, 2, z), vec3(0, 2, 0), vec3(0, 1, 0));
PV = P * View;
Graphics::setMatrix(pvLocation, PV);
// iterate the MeshObjects
MeshObject** current = &objects[0];
while (*current != nullptr) {
// set the model matrix
Graphics::setMatrix(mLocation, (*current)->M);
(*current)->render(tex);
++current;
}
// Update the physics
physics.Update(deltaT);
PhysicsObject** currentP = &physics.physicsObjects[0];
while (*currentP != nullptr) {
(*currentP)->UpdateMatrix();
Graphics::setMatrix(mLocation, (*currentP)->Mesh->M);
(*currentP)->Mesh->render(tex);
++currentP;
}
particleSystem->update(deltaT);
particleSystem->render(tex, particleImage, mLocation, View);
Graphics::end();
Graphics::swapBuffers();
}
void SpawnSphere(vec3 Position, vec3 Velocity) {
PhysicsObject* po = new PhysicsObject();
po->SetPosition(Position);
po->Velocity = Velocity;
po->Collider.radius = 0.2f;
po->Mass = 5;
po->Mesh = sphere;
// The impulse should carry the object forward
// Use the inverse of the view matrix
po->ApplyImpulse(Velocity);
physics.AddObject(po);
}
void keyDown(KeyCode code, wchar_t character) {
if (code == Key_Space) {
// The impulse should carry the object forward
// Use the inverse of the view matrix
vec4 impulse(0, 0.4, 2, 0);
mat4 viewI = View;
viewI.Invert();
impulse = viewI * impulse;
vec3 impulse3(impulse.x(), impulse.y(), impulse.z());
SpawnSphere(cameraPosition + impulse3 *0.2f, impulse3);
}
}
void keyUp(KeyCode code, wchar_t character) {
if (code == Key_Left) {
// ...
}
}
void mouseMove(int x, int y, int movementX, int movementY) {
}
void mousePress(int button, int x, int y) {
}
void mouseRelease(int button, int x, int y) {
}
void init() {
FileReader vs("shader.vert");
FileReader fs("shader.frag");
vertexShader = new Shader(vs.readAll(), vs.size(), VertexShader);
fragmentShader = new Shader(fs.readAll(), fs.size(), FragmentShader);
// This defines the structure of your Vertex Buffer
VertexStructure structure;
structure.add("pos", Float3VertexData);
structure.add("tex", Float2VertexData);
structure.add("nor", Float3VertexData);
program = new Program;
program->setVertexShader(vertexShader);
program->setFragmentShader(fragmentShader);
program->link(structure);
tex = program->getTextureUnit("tex");
pvLocation = program->getConstantLocation("PV");
mLocation = program->getConstantLocation("M");
tintLocation = program->getConstantLocation("tint");
objects[0] = new MeshObject("Base.obj", "Level/basicTiles6x6.png", structure);
objects[0]->M = mat4::Translation(0.0f, 1.0f, 0.0f);
sphere = new MeshObject("ball_at_origin.obj", "Level/unshaded.png", structure);
SpawnSphere(vec3(0, 2, 0), vec3(0, 0, 0));
Graphics::setRenderState(DepthTest, true);
Graphics::setRenderState(DepthTestCompare, ZCompareLess);
Graphics::setTextureAddressing(tex, U, Repeat);
Graphics::setTextureAddressing(tex, V, Repeat);
particleImage = new Texture("SuperParticle.png", true);
particleSystem = new ParticleSystem(100, structure);
}
}
int kore(int argc, char** argv) {
Application* app = new Application(argc, argv, width, height, 0, false, "Exercise7");
init();
app->setCallback(update);
startTime = System::time();
lastTime = 0.0f;
Kore::Mixer::init();
Kore::Audio::init();
Keyboard::the()->KeyDown = keyDown;
Keyboard::the()->KeyUp = keyUp;
Mouse::the()->Move = mouseMove;
Mouse::the()->Press = mousePress;
Mouse::the()->Release = mouseRelease;
app->start();
delete app;
return 0;
}
There's a comment where the teacher wants us to add the code.
The variable for the view matrix, View, is declared inside a namespace. I've only ever used namespaces that come from libraries, but this one doesn't have a name. So how do I use it?
The comment says that we should use the matrix V. So do I just set V = inverse view matrix * model matrix, and that removes the rotation?
I'm sorry for the stupid questions; it's supposed to be a class for beginners, but it's really anything but. The lecture notes aren't very helpful when it comes to the programming part, and I only found tutorials for OpenGL, Unity, or DirectX, and we're not using any of those.
Please help me, I need to hand this in by Saturday morning, and I've already spent the last two days trying out code with nothing to show for it!
You can find the whole thing here: https://github.com/TUDGameTechnology/Exercise7
You don't have to do anything special to access an unnamed namespace. This thread explains more.
You are most probably trying to reference View from functions that cannot see it because of the order in which things are defined in your file.
This line in your update method:
particleSystem->render(tex, particleImage, mLocation, View);
is already passing View into the render method.
void render(TextureUnit tex, Texture* image, ConstantLocation mLocation, mat4 V)
That means that in this case mat4 V is your camera's view matrix.