When I try to draw a low-opacity background over my content in Cinder, the screen flashes red at the start and then keeps flickering while the content is drawn.
I'm trying to replicate an effect I used in Processing/p5.js where the background isn't fully opaque, so moving objects appear to leave fading trails:
gl::enableAlphaBlending();
gl::color( ColorA(0.0f, 0.0f, 0.0f, 0.05f) );
gl::drawSolidRect( getWindowBounds() );
My research suggests this could be an OpenGL issue, but I'm a beginner with C++/Cinder/OpenGL so I'm not sure how to proceed.
In the end I managed to fix my issue by using:
CINDER_APP( spaintApp, RendererGl( RendererGl::Options().msaa( 4 ) ) )
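For context, a minimal sketch of the fade-trail draw() override this fix applies to, assuming the app class is named spaintApp as in the macro above:
void spaintApp::draw()
{
    gl::enableAlphaBlending();
    // a translucent black rect over last frame's image fades it gradually;
    // note: no gl::clear() here, or the trail would be wiped every frame
    gl::color( ColorA( 0.0f, 0.0f, 0.0f, 0.05f ) );
    gl::drawSolidRect( getWindowBounds() );
    // ... draw the moving content on top ...
}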
I want to write a Windows C++ application where the contents of the window are whatever is behind the window (as if the window were transparent). That is, I want to retrieve the bounding box of my window, capture whatever is on screen beneath those coordinates, and draw it on my window. Therefore it is crucial that I can exclude the window itself during the capture.
"Why not just make the window transparent?" you ask. Because the next step for me is to make modifications to that image. I want to apply some arbitrary filters on it. For example, let's just say that I want to blur that image, so that my window looks like a frosted glass.
I tried to use the magnification API sample at https://code.msdn.microsoft.com/windowsdesktop/Magnification-API-Sample-14269fd2 which actually provides me the screen contents excluding my window. However, re-rendering the image is done in a timer, which causes a very jittery image; and I couldn't figure out how to retrieve and apply arbitrary transformations to that image.
I don't know where to start and really could use some pointers at this point. Sorry if I'm approaching this from a stupid perspective.
Edit: I am adding a mock-up of what I mean:
Edit 2: Just like in the Magnification API example, the view would be constantly refreshed (as frequently as possible, say every 16 ms just for argument's sake). See Visolve Deflector for an example, although it does not apply any effects to the captured region.
Again, I will be modifying the image data afterwards; therefore I cannot use the Magnification API's kernel matrix support.
You did not specify whether this is a one-time activity or you need a continuous stream of what's behind your window (like Magnifier etc.), and if continuous, what update frequency you may need.
Anyway, in either case I see two primary use cases:
The contents behind your app are constant: you may not believe it, but most of the time the contents behind your window will not change.
The contents behind your window are changing/animating: this is the trickier case.
Thus if you can let go of the non-constant/animated background use case, the solution is pretty simple in both the one-shot and continuous-stream cases (see the sketch after these steps):
Hide your application window.
Take a screenshot, and cache it!
Show your app back (crop everything apart from your application main window's bounding box), and now the user can apply the filter.
Even if the user changes the filter, reapply it to the cached image.
Track your window's WM_MOVE/WM_SIZE and repeat the above process for the new dimensions.
Additionally, if you need to be precise, use SetWindowsHookEx for CBT etc.
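A minimal Win32 sketch of those steps; the CaptureBehindWindow helper is hypothetical, the fixed Sleep is a crude stand-in for proper repaint synchronization, and error handling is omitted:
#include <windows.h>

HBITMAP CaptureBehindWindow(HWND hwnd)
{
    ShowWindow(hwnd, SW_HIDE);   // 1. hide, so we don't capture ourselves
    Sleep(100);                  // crude wait for the desktop to repaint

    RECT rc;
    GetWindowRect(hwnd, &rc);    // the window's bounding box in screen coords
    int w = rc.right - rc.left;
    int h = rc.bottom - rc.top;

    HDC screenDC = GetDC(NULL);
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    // 2. screenshot just the region under the bounding box into the bitmap
    BitBlt(memDC, 0, 0, w, h, screenDC, rc.left, rc.top, SRCCOPY);

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);

    ShowWindow(hwnd, SW_SHOW);   // 3. show the app again
    return bmp;                  // cache this; filters are (re)applied to it
}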
Corner cases off the top of my head:
Notify icons/balloon tooltips
Desktop background scheduling (Windows or a third-party app)
Application-specific message boxes etc.!
Hope this helps!
You can start by modifying MAGCOLOREFFECT. In MagnifierSample.cpp we have:
if (ret)
{
    // 5x5 color matrix that inverts every channel (MagEffectInvert)
    MAGCOLOREFFECT magEffectInvert =
    {{
        { -1.0f,  0.0f,  0.0f,  0.0f,  0.0f },
        {  0.0f, -1.0f,  0.0f,  0.0f,  0.0f },
        {  0.0f,  0.0f, -1.0f,  0.0f,  0.0f },
        {  0.0f,  0.0f,  0.0f,  1.0f,  0.0f },
        {  1.0f,  1.0f,  1.0f,  0.0f,  1.0f }
    }};
    ret = MagSetColorEffect(hwndMag, &magEffectInvert);
}
See Using a Color Matrix to Transform a Single Color.
For more advanced effects, you can blit the contents to a memory device context.
I've achieved something akin to this using GetForegroundWindow and PrintWindow.
It's kind of involved, but here is a picture. The image updates with its source, but it's slow, so there is a significant lag (roughly 0.2 to 0.5 seconds).
Rather than a blur effect I opted for a sine-wave effect. Also, using GetForegroundWindow basically means it can only copy the contents of one window. If you want to hear more, just respond and I'll put together some steps and an example repo.
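For the curious, a rough sketch of that approach (bare snippet, error handling omitted); PW_CLIENTONLY limits the capture to the client area:
HWND target = GetForegroundWindow();   // the one window we can copy this way
RECT rc;
GetClientRect(target, &rc);

HDC targetDC = GetDC(target);
HDC memDC    = CreateCompatibleDC(targetDC);
HBITMAP bmp  = CreateCompatibleBitmap(targetDC, rc.right, rc.bottom);
SelectObject(memDC, bmp);

// ask the window to paint itself into our memory DC
PrintWindow(target, memDC, PW_CLIENTONLY);
// bmp now holds the window's contents; apply the effect to its pixels
ReleaseDC(target, targetDC);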
I am using FreeType-gl to draw text over a background image. I could indeed draw them separately. However, when I draw the text after drawing the background, the background image is overwritten by the text with a uniform background color, even though I don't call glClearColor(0.0f, 0.0f, 0.0f, 1.0f) to clear before the text drawing.
I am wondering why the text drawing can't have a transparent background.
The problem was solved once I realized that the buffer was being cleared before the text drawing. In other words, there shouldn't be any buffer clearing (i.e., glClear(GL_COLOR_BUFFER_BIT)) between the background and text drawing in the drawing loop.
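To illustrate, a minimal sketch of the corrected loop body; drawBackgroundImage and drawFreeTypeText are hypothetical helpers standing in for the actual image and FreeType-gl draw calls:
glClear(GL_COLOR_BUFFER_BIT);   // clear once, at the top of the frame
drawBackgroundImage();          // hypothetical helper: draw the image quad
// no glClear() between here and the text, or the background is wiped
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // glyph alpha blends over the image
drawFreeTypeText();             // hypothetical helper: FreeType-gl draw calls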
I am porting one of my games from Windows to Linux, using GLFW3 for window creation. The code runs perfectly well in Windows (using GLFW3 and OpenGL), but when I compile and run it in Ubuntu 12.10 there is an issue in fullscreen mode (in windowed mode it runs fine): the right part (about 25%) of the frame gets stretched and goes off screen.
Here's how I am creating GLFW window:
window = glfwCreateWindow(1024, 768, "Chaos Shell", glfwGetPrimaryMonitor(), NULL);
And here's my OpenGL initialisation code:
glViewport(0, 0, 1024, 768);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-512.0f, 512.0f, -384.0f, 384.0f, 0.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The above code should load the game in fullscreen mode at 1024 by 768 resolution.
When I run it, glfwCreateWindow changes the screen resolution from my current resolution (1366 by 768) to 1024 by 768, but the right part of the frame goes off screen. If I manually change the resolution to 1024 by 768 first and then run the game, everything looks alright.
Also, running the same code in Windows doesn't show any issue no matter what my current screen resolution is. It just changes the resolution to 1024 by 768 and everything looks perfect.
I would really appreciate it if someone can figure out why it acts up in Ubuntu.
You're probably running into an issue with the window manager. In short, the window manager didn't notice the change in resolution and, due to the fullscreen flag, expands the window to the old resolution.
Or you didn't get 1024×768 at all because your screen doesn't support it, and you instead got a smaller 16:9 resolution. So don't use hardcoded values for setting the viewport.
Honestly: you shouldn't change the screen resolution at all! Hardly anybody uses CRT displays anymore, and for displays using discrete pixels (LCDs, AMOLEDs, DLP projectors, LCoS projectors) it makes little sense to run them at anything other than their native resolution. So just create a fullscreen window without making the system change the resolution.
When setting the viewport, query the actual window size from GLFW instead of relying on your hardcoded values (this alone could fix your issue with the resolution change).
If you want to reduce the load on the GPU when rendering: use an FBO to render to a texture of the desired resolution, and in a last step draw that texture to a fullscreen quad to stretch it up to the display size. It looks better than what most screen scalers produce, and your game doesn't mess with the rest of the system.
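A rough sketch of that render-to-texture approach, assuming GL 3.0-style framebuffer objects and a fixed 1024×768 render resolution (window_width/window_height are the values queried from GLFW):
// one-time setup: a 1024x768 color texture attached to an FBO
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 768, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

// each frame: render the scene at the fixed resolution...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 1024, 768);
/* ... draw the scene ... */

// ...then stretch the result over the window's actual size
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, window_width, window_height);
glBindTexture(GL_TEXTURE_2D, tex);
/* ... draw a fullscreen textured quad ... */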
Update due to comment
Setting the screen resolution because the game is unable to cope with non-4:3 resolutions is very bad style. It took long enough for large game studios to adapt to wide screens, which is unacceptable, because it's so easy to fix.
Don't cover up mistakes by forcing something on users they might not want. And if a user has a nice display, give them the opportunity to actually use it!
Your problem is not the display resolution; it's the hardcoded viewport and projection setup. You need to fix that.
To fix your "game looks horrible at a different resolution" issue, you need to set the viewport and projection in response to the window's size, like this:
int window_width, window_height;
glfwGetWindowSize(window, &window_width, &window_height);
if( 0 == window_width
 || 0 == window_height) {
    /* window has no area, so there's nothing to draw to */
    return;
}

float const window_aspect = (float)window_width / (float)window_height;

/* we want to draw with a total of 768 units vertically, as if we
 * were drawing to a screen 768 pixels in height. */
float const projection_scale = 768./2;

glViewport(0, 0, window_width, window_height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho( -window_aspect * projection_scale,
          window_aspect * projection_scale,
         -projection_scale,
          projection_scale,
          0.0f,
          1.0f );
I'm trying to put some blur around a sun, to give it a glow effect.
I'm using the method here:
http://nehe.gamedev.net/tutorial/radial_blur__rendering_to_a_texture/18004/
Basically, I draw a sphere; the code then grabs the current buffer as a texture and redraws it a number of times, stretching it each time.
This works great when the object is in the centre of the screen. However, I want to have it offset to, say, the top right.
This is the main part that I believe needs adjusting:
glColor4f(1.0f, 1.0f, 1.0f, alpha);  // Set The Alpha Value (Starts At 0.2)
glTexCoord2f(0+spost, 1-spost);      // Texture Coordinate ( 0, 1 )
glVertex2f(0, 0);                    // First Vertex  ( 0, 0 )
glTexCoord2f(0+spost, 0+spost);      // Texture Coordinate ( 0, 0 )
glVertex2f(0, 480);                  // Second Vertex ( 0, 480 )
glTexCoord2f(1-spost, 0+spost);      // Texture Coordinate ( 1, 0 )
glVertex2f(640, 480);                // Third Vertex  ( 640, 480 )
glTexCoord2f(1-spost, 1-spost);      // Texture Coordinate ( 1, 1 )
glVertex2f(640, 0);                  // Fourth Vertex ( 640, 0 )
For the life of me, though, I can't work out how to offset it each pass so that the blurred copies line up with the sun's offset position instead of radiating from the screen centre. I understand that the whole screen is being captured, but there must be a way to offset this when the texture is drawn...
How?
This maybe isn't a direct answer to your question, but you shouldn't focus too much on effects via the fixed pipeline. The NeHe tutorials are good, but a bit outdated. I recommend you just skim through the basics and incorporate shaders into your code. They're much faster and will let you create much more complex effects more easily.
Generally, if you want to scale around a point (sx, sy), you need to translate your world so that (sx, sy) is at the origin, do your scaling, and then translate back so that (sx, sy) returns to where it was originally.
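In fixed-pipeline OpenGL that looks something like the snippet below, where sx, sy and zoom are hypothetical values for the sun's screen position and the per-pass stretch; remember that OpenGL applies the transform specified last to the vertices first:
float sx = 480.0f, sy = 120.0f;  // hypothetical sun position in screen units
float zoom = 1.02f;              // hypothetical per-pass stretch factor
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(sx, sy, 0.0f);      // 3. move the origin back out to (sx, sy)
glScalef(zoom, zoom, 1.0f);      // 2. scale about the origin
glTranslatef(-sx, -sy, 0.0f);    // 1. bring (sx, sy) to the origin
/* ... draw the stretched, blended quad here ... */
glPopMatrix();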
Does anyone know how I can achieve the following effects in OpenGL:
Change the brightness of the rendered scene
Or implement a gamma setting in OpenGL
I have tried changing the ambient parameter of the light and the type of light (directional and omnidirectional), but the result was not uniform. TIA.
Thanks for your help; some additional information:
* I can't use any Windows-specific APIs.
* The gamma setting should not affect the whole window, as I must have different gamma values for different views.
On Win32 you can use SetDeviceGammaRamp to adjust the overall brightness/gamma. However, this affects the entire display, so it's not a good idea unless your app is fullscreen.
The portable alternative is to either draw the entire scene brighter or dimmer (which is a hassle), or to slap a fullscreen alpha-blended quad over the whole scene to brighten or darken it as desired. Neither of these approaches can affect the gamma curve, only the overall brightness; to adjust the gamma you need to grab the entire scene into a texture and then render it back to the screen via a pixel shader that runs each texel through a gamma function.
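For instance, the pixel shader for that last step could be as simple as this GLSL fragment program, here embedded as a C++ string (the scene texture and gamma uniform names are assumptions):
// runs every texel of the captured scene through pow(c, 1/gamma)
const char* gammaFrag =
    "uniform sampler2D scene;\n"
    "uniform float gamma;\n"
    "void main() {\n"
    "    vec4 c = texture2D(scene, gl_TexCoord[0].st);\n"
    "    gl_FragColor = vec4(pow(c.rgb, vec3(1.0 / gamma)), c.a);\n"
    "}\n";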
OK, having read the updated question: what you need is a quad with blending set up to darken or brighten everything underneath it, e.g.
if( brightness > 1 )
{
    /* result = (brightness-1)*dst + dst = brightness*dst */
    glBlendFunc( GL_DST_COLOR, GL_ONE );
    glColor3f( brightness-1, brightness-1, brightness-1 );
}
else
{
    /* result = dst*src = brightness*dst */
    glBlendFunc( GL_ZERO, GL_SRC_COLOR );
    glColor3f( brightness, brightness, brightness );
}
glEnable( GL_BLEND );
draw_quad();
http://www.gamedev.net/community/forums/topic.asp?topic_id=435400 might be an answer to your question; otherwise, you could probably implement gamma correction as a pixel shader.