GLSL fault: syntax error, unexpected IDENTIFIER, expecting RIGHT_PAREN - glsl

I am trying to use glslangValidator to compile my shaders, and it gives me this error:
ERROR: deferred.frag:954: '' : syntax error, unexpected IDENTIFIER, expecting RIGHT_PAREN
Here is where it goes wrong:
#define FXaaFloat float
#define FxaaFloat4 float4
#if (FXAA_GREEN_AS_LUMA == 0)
FxaaFloat FxaaLuma(FxaaFloat4 rgba)
{
    return 0.299*rgba.x + 0.587*rgba.y + 0.114*rgba.z;
}
#else
FxaaFloat FxaaLuma(FxaaFloat4 rgba)
{
    return rgba.y
}
#endif
Here is the definition of FxaaFloat:
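Though the question is truncated, a hedged guess at the fix: the first #define spells FXaaFloat with a capital X while the function signature uses FxaaFloat, float4 is an HLSL type rather than a GLSL one (the FXAA 3.11 header maps FxaaFloat4 to vec4 when targeting GLSL), and the second return is missing a semicolon. A corrected sketch:

```glsl
#define FxaaFloat  float  // consistent casing: FxaaFloat, not FXaaFloat
#define FxaaFloat4 vec4   // GLSL has no float4; that is HLSL

#if (FXAA_GREEN_AS_LUMA == 0)
FxaaFloat FxaaLuma(FxaaFloat4 rgba)
{
    return 0.299 * rgba.x + 0.587 * rgba.y + 0.114 * rgba.z;
}
#else
FxaaFloat FxaaLuma(FxaaFloat4 rgba)
{
    return rgba.y;  // the semicolon was missing in the original
}
#endif
```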


C++ Unresolved External Symbols Embedded Lua (based on longjmp issue) (not duplicate)

I will describe the problem as follows:
Compiler: Visual Studio 2019
The root of the problem is that longjmp crashes the process, because I manually map my code into the process.
The code works fine as follows, but crashes on any syntax error in the Lua script because of longjmp:
extern "C" {
#include "lua.h"
#include "lualib.h"
.....
}
I want to get C++ exceptions instead, which come from this block in the Lua source:
#if defined(__cplusplus) && !defined(LUA_USE_LONGJMP) /* { */
/* C++ exceptions */
#define LUAI_THROW(L,c) throw(c)
#define LUAI_TRY(L,c,a) \
try { a } catch(...) { if ((c)->status == 0) (c)->status = -1; }
#define luai_jmpbuf int /* dummy variable */
#elif defined(LUA_USE_POSIX) /* }{ */
/* in POSIX, try _longjmp/_setjmp (more efficient) */
#define LUAI_THROW(L,c) _longjmp((c)->b, 1)
#define LUAI_TRY(L,c,a) if (_setjmp((c)->b) == 0) { a }
#define luai_jmpbuf jmp_buf
#else /* }{ */
/* ISO C handling with long jumps */
#define LUAI_THROW(L,c) longjmp((c)->b, 1)
#define LUAI_TRY(L,c,a) if (setjmp((c)->b) == 0) { a }
#define luai_jmpbuf jmp_buf
#endif /* } */
Because longjmp crashes my process, I decided to compile my code with the C++ compiler (without extern "C"):
#include "lua.h"
#include "lualib.h"
.....
This is how I included it. But that led to the following problem:
error LNK2019: unresolved external symbol _lua_pcall
...
...
...
I thought about it a lot but couldn't find a solution. It seems absurd that this is a linker error, because all the Lua headers and .c files are part of my project.
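For the LNK2019 itself, a common cause (hedged; assuming the Lua .c files in the project are still compiled as C while the headers are now included without extern "C"): the C++ call sites reference a mangled lua_pcall, but the C-compiled objects export the unmangled name. A conventional sketch of the consistent setup:

```cpp
// Sketch, assuming the Lua sources are built inside your project:
// 1) Compile every Lua .c file as C++ (MSVC: /TP, or rename them to .cpp).
//    Lua then picks the C++-exception branch of LUAI_THROW/LUAI_TRY,
//    so errors unwind safely instead of longjmp-ing.
// 2) Include the headers WITHOUT extern "C", so call sites use the same
//    (mangled) C++ linkage as the now-C++-compiled Lua objects:
#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"
// Mixing the two modes - C-compiled lua.c with C++-linkage includes -
// is what produces LNK2019: unresolved external symbol _lua_pcall.
```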
#define LUAI_THROW(L,c) c->throwed = true
#define LUAI_TRY(L,c,a) \
__try { a } __except(filter()) { if ((c)->status == 0 && ((c)->throwed)) (c)->status = -1; }
#define luai_jmpbuf int /* dummy variable */
By the way, I solved my exception-throwing issue like this (using MSVC's __try/__except structured exception handling). I'm not sure whether it's correct, but it no longer crashes.
struct lua_longjmp {
struct lua_longjmp *previous;
luai_jmpbuf b;
volatile int status; /* error code */
bool throwed;
};
It works as expected even if you build without C++ exceptions.

Macro function identifier not found

I created an assert macro that works fine, and I wanted to make another one that only breaks. I copied the assert macro and renamed it, but it doesn't work when I use it in one particular header file, even though it works everywhere else.
I'm getting this error:
Severity Code Description Project File Line Suppression State
Error C3861 'RADIANT_TEST_BREAK': identifier not found Sandbox F:\Path\Buffer.h 63
Both macros are in the same file, Core.h, which is included wherever I need them.
#define RADIANT_CORE_ASSERT(x, ...) { if(!(x)) { RADIANT_CORE_ERROR("Assertion Failed: {0}", __VA_ARGS__); __debugbreak(); } }
#define RADIANT_TEST_BREAK(x, ...) { if(!(x)) { RADIANT_CORE_ERROR("Assertion Failed: {0}", __VA_ARGS__); __debugbreak(); } }
I have no idea why the break macro doesn't work in that one place.
Here is the code I am using it from:
struct BufferElement
{
    std::string Name;
    ShaderDataType Type;
    uint32_t Size;
    uint32_t Offset;
    bool Normalized;

    BufferElement(ShaderDataType type, const std::string& name, bool normalized = false)
        : Name(name), Type(type), Size(ShaderDataTypeSize(type)), Offset(0), Normalized(normalized)
    {
    }

    uint32_t GetComponentCount() const
    {
        switch (Type)
        {
            case ShaderDataType::Float:  return 1;
            case ShaderDataType::Float2: return 2;
        }
        RADIANT_TEST_BREAK(false, "Unknown ShaderDataType!");
        return 0;
    }
};
If I swap RADIANT_TEST_BREAK for RADIANT_CORE_ASSERT, it works.
I found what was causing the problem:
I had the definition inside an #ifdef but not in the #else.
#ifdef RADIANT_ENABLE_ASSERTS
#define RADIANT_CORE_ASSERT(x, ...) { if(!(x)) { RADIANT_CORE_ERROR("Assertion Failed: {0}", __VA_ARGS__); __debugbreak(); } }
#define RADIANT_TEST_BREAK(x, ...) { if(!(x)) { RADIANT_CORE_ERROR("Assertion Failed: {0}", __VA_ARGS__); __debugbreak(); } }
#else
#define RADIANT_CORE_ASSERT(x, ...)
//Make sure to include this line
#define RADIANT_TEST_BREAK(x, ...)
#endif
This only seemed to be a problem in one place, and I still don't know why, but this fixed it for me.

Error GLSL incorrect version 450

I have an OpenGL application that I compiled in the past but now can't, on the same machine. The problem seems to be that the fragment shader is not compiling properly.
I'm using:
GLEW 2.1.0
GLFW 3.2.1
All the necessary context is created at the beginning of the program. Here is what my program-creation function looks like:
std::string vSource, fSource;
try
{
    vSource = getSource(vertexShader, "vert");
    fSource = getSource(fragmentShader, "frag");
}
catch (std::runtime_error& e)
{
    std::cout << e.what() << std::endl;
}

GLuint vsID, fsID;
try
{
    vsID = compileShader(vSource.c_str(), GL_VERTEX_SHADER); // Source char* was checked and looks good
    fsID = compileShader(fSource.c_str(), GL_FRAGMENT_SHADER);
}
catch (std::runtime_error& e)
{
    std::cout << e.what() << std::endl; // "incorrect glsl version 450" thrown here
}

GLuint programID;
try
{
    programID = createProgram(vsID, fsID); // Debugging fails here
}
catch (std::runtime_error& e)
{
    std::cout << e.what() << std::endl;
}

glDeleteShader(vsID);
glDeleteShader(fsID);
return programID;
My main:
/* ---------------------------- */
/* OPENGL CONTEXT SET WITH GLEW */
/* ---------------------------- */
static bool contextFlag = initializer::createContext(vmath::uvec2(1280, 720), "mWs", window);
std::thread* checkerThread = new std::thread(initializer::checkContext, contextFlag);

/* --------------------------------- */
/* STATIC STATE SINGLETON DEFINITION */
/* --------------------------------- */
Playing Playing::playingState; // Failing comes from here, which tries to create a program

/* ---- */
/* MAIN */
/* ---- */
int main(int argc, char** argv)
{
    checkerThread->join();
    delete checkerThread;
    Application* app = new Application();
    ...
    return 0;
}
Here is an example of the fragment shader file:
#version 450 core

out vec4 fColor;

void main()
{
    fColor = vec4(0.5, 0.4, 0.8, 1.0);
}
And these are the errors I catch:
[Engine] Glew initialized! Using version: 2.1.0
[CheckerThread] Glew state flagged as correct! Proceeding to mainthread!
Error compiling shader: ERROR: 0:1: '' : incorrect GLSL version: 450
ERROR: 0:7: 'fColor' : undeclared identifier
ERROR: 0:7: 'assign' : cannot convert from 'const 4-component vector of float' to 'float'
My specs are the following:
Intel HD 4000
NVIDIA GeForce 840M
I should state that I compiled shaders on this same machine before; I can't anymore after a disk format, even though every driver is up to date.
As stated in the comments, the problem was a faulty option for which graphics card the IDE runs with. Since Windows defaults to the integrated Intel HD 4000, switching the NVIDIA card to be the OS-preferred one fixed the problem.
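A quick way to confirm which GPU actually owns the context is to query the standard GL_RENDERER and GL_VERSION strings (a fragment, not a runnable program, since it needs a current GL context; the Intel HD 4000 Windows driver tops out around OpenGL 4.0, which is why a #version 450 shader is rejected on it):

```cpp
// Right after context creation / glewInit():
std::cout << "Renderer: " << glGetString(GL_RENDERER) << "\n"   // which GPU the context landed on
          << "Version:  " << glGetString(GL_VERSION)  << "\n";  // e.g. a 4.0 build string on HD 4000
```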

‘GL_VERTIEX_SHADER’ was not declared in this scope - when it is <--It was a typo.

This is really weird.
Line 1752 of glew.h:
#define GL_VERTEX_SHADER 0x8B31
under the GL_VERSION_2_0 header guard.
I have this code:
GLenum err = glewInit();
if (GLEW_OK != err) {
    ::std::cout << "Error: " << glewGetErrorString(err) << "\n";
}
//GLuint shader = glCreateShader(GL_VERTIEX_SHADER); // <-- FAILS
GLuint shader = glCreateShader(0x8b31); // <-- WORKS
::std::cout << "Shader: " << shader << "\n" << "Errorstr: "
            << glewGetErrorString(glGetError()) << "\n";
#ifdef GL_VERSION_2_0
::std::cout << "OKAY I have 2.0\n";
#endif
::std::cout << glGetString(GL_VERSION) << "\n";
Output:
Shader: 1
Errorstr: No error
OKAY I have 2.0
4.4.0 NVIDIA 331.38
If I use GL_VERTEX_SHADER, however, I get a symbol-not-found error; weirdly, my IDE can't find it either.
I've just noticed I actually spelled "VERTEX" wrong. It works now. I feel really silly.
It took writing the title to make me see it, though.

C++ macro concatenation not working under GCC

#include <iostream>

void LOG_TRACE() { std::cout << "reach here"; }

#define LOG_LL_TRACE LOG_TRACE
#define LL_TRACE 0
#define __LOG(level) LOG_##level()
#define LOG(level) __LOG(##level)

int main()
{
    LOG(LL_TRACE);
    return 0;
}
The code works under Visual Studio, but GCC reports: test.cpp:13:1: error: pasting "(" and "LL_TRACE" does not give a valid preprocessing token.
How can I fix it?
P.S. The macro expansion is supposed to be LOG(LL_TRACE) --> __LOG(LL_TRACE) --> LOG_LL_TRACE().
P.S. Suppose LL_TRACE must keep its value 0; do not remove it.
Two things make this code fail to compile on g++.
First, the error you're quoting occurs because you want this:
#define LOG(level) __LOG(level)
Notice: no ##. Those hash marks mean "concatenate", but you're not concatenating anything, just forwarding an argument.
The second error is that you have to remove
#define LL_TRACE 0
With that line, you end up calling LOG(0), which expands into LOG_0(), which isn't defined.
Shouldn't it be :
#define LOG(level) __LOG(level)
That works:
#include <iostream>

void LOG_TRACE() { std::cout << "reach here"; }

#define LOG_LL_TRACE LOG_TRACE
#define __LOG(level) LOG_##level()
#define LOG(level) __LOG(level)

int main()
{
    LOG(LL_TRACE);
    return 0;
}