I know that the PaintGL() function is called at the same frequency as the screen's refresh rate (let's say 60 times per second). But if no pixels are displayed on the screen (if another window hides the OpenGL one), the calls to PaintGL() are no longer throttled and it gets called far more often, which maxes out the CPU and is annoying.
So, is there a way to throttle it?
I'm using MacOS 10.9 and Qt Creator.
I don't know a lot about vsync. The fact is, my software uses 30% of the CPU when it's in the foreground, and when it's hidden it goes up to 95%.
If you haven't enabled vsync, frames are swapped as often as possible (unless you've added an artificial pause). If you push a high load onto the graphics card, it is likely that your program is GPU-limited (the CPU has nothing to do and stands idle, waiting for the GPU to finish drawing).
When your program is invisible, drawing costs close to nothing because no one sees the results anyway (an optimisation the graphics driver performs internally).
So, the answer to your question is: enable vsync. It will lock the buffer swap interval to the monitor's refresh rate, so your frame rate will never rise above the refresh rate (in fact, it will be locked to 60/30/20/etc. if your monitor runs at 60 Hz). This is a very useful technique that, for example, eliminates screen tearing.
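As a rough illustration, here is a minimal sketch of how vsync can be requested in Qt (assuming Qt 5.4 or newer, where QSurfaceFormat::setSwapInterval() and QOpenGLWidget are available; the widget shown is just a stand-in for your own paintGL() subclass):

#include <QApplication>
#include <QSurfaceFormat>
#include <QOpenGLWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Request a swap interval of 1: one buffer swap per monitor refresh (vsync).
    QSurfaceFormat fmt = QSurfaceFormat::defaultFormat();
    fmt.setSwapInterval(1);
    QSurfaceFormat::setDefaultFormat(fmt);   // must be set before the first GL widget is created

    QOpenGLWidget w;                         // in practice: your subclass that overrides paintGL()
    w.show();
    return app.exec();
}

With vsync active, the driver blocks in the buffer swap until the next refresh, so the paint loop no longer spins freely when there is nothing new to show.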
I'm trying to create a custom GUI in OpenGL from scratch in C++, but I was wondering whether that is possible or not.
I'm getting started on some code right now, but I'm going to hold off until I get an answer.
YES.
If you play a video game, in general every UI is implemented with an API like OpenGL, Direct3D, Metal or Vulkan. Since the rendering surface runs at a higher frame rate than the OS UI APIs, mixing the two slows the game down.
Start by making a view class as a base class, then implement the actual UI classes such as button, table and so on, inheriting from that base class.
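For illustration only, a minimal C++ sketch of such a hierarchy (all class and member names here are made up for this example; the drawing methods would issue your OpenGL calls):

#include <functional>
#include <string>

// Base class: every widget knows its rectangle, how to draw itself,
// and how to react to a click.
class View {
public:
    virtual ~View() = default;
    virtual void draw() const = 0;              // issue the GL draw calls here
    virtual bool onClick(float px, float py) { return contains(px, py); }
    bool contains(float px, float py) const {
        return px >= x && px <= x + w && py >= y && py <= y + h;
    }
    float x = 0, y = 0, w = 0, h = 0;
};

// A concrete widget built on top of the base class.
class Button : public View {
public:
    std::string label;
    std::function<void()> onPressed;

    void draw() const override {
        // draw a quad for the background, then render the label on top
    }
    bool onClick(float px, float py) override {
        if (!contains(px, py)) return false;
        if (onPressed) onPressed();
        return true;
    }
};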
Making a UI with a graphics API is similar to making a game in that you use the same graphics techniques, such as texture compression, mipmaps, MSAA, various special effects and so on. However, font handling is a huge part of the work on its own, which is why many game developers reach for a game engine or UI libraries.
https://www.twitch.tv/heroseh
They work on a pure C + OpenGL user interface library daily at about 9 AM (EST).
Here is their github repo for the project:
https://github.com/heroseh/vui
I myself am in the middle of stubbing in a half-assed user interface that
is just a list of clickable buttons. ( www.twitch.com/kanjicoder )
The basic idea I ran with is that both the GPU and CPU need to know about your
data. So I store all the required variables for my UI in a texture and then
sync that texture with the GPU every time it changes.
On the CPU side it's a uint8 array of bytes.
On the GPU side it's an unsigned 32-bit texture.
I have getters and setters in both the GPU (GLSL) code and the CPU (C99) code that manage the packing and unpacking of variables into and out of the pixels of the texture.
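A rough sketch of that packing-and-sync idea (the 512x512 size, RGBA8 format and function names are assumptions for illustration; the texture is assumed to be created and bound already):

#include <stdint.h>
#include <GL/gl.h>   // or whichever loader is already in use (glad, GLEW, ...)

static uint8_t ui_state[512 * 512 * 4];   // CPU-side copy of the UI texture

// Pack one 32-bit value into the 4 bytes of a single pixel, then push just
// that pixel to the GPU texture so both sides stay in sync.
void ui_set_u32(int px, int py, uint32_t value)
{
    uint8_t *p = &ui_state[(py * 512 + px) * 4];
    p[0] = (uint8_t)(value >> 24);
    p[1] = (uint8_t)(value >> 16);
    p[2] = (uint8_t)(value >> 8);
    p[3] = (uint8_t)(value);

    glTexSubImage2D(GL_TEXTURE_2D, 0, px, py, 1, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, p);
}

The matching GLSL getter would read the same pixel with texelFetch and reassemble the bytes into a uint.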
It's a bit crazy, but I wanted the "lowest-common-denominator" method of creating a UI so I can easily port it to any graphics library of my choice in the future. For example, eventually I might want to switch from OpenGL to Vulkan. So if I keep most of my logic as just manipulations of a big 512x512 array of pixels, I shouldn't have too much refactoring work ahead of me.
In my current app I need to share screens, à la Skype or Discord. I'd prefer not to use external libs, but will if I have to.
So far I have been sending the screenshots in down-scaled bitmap form over TCP sockets and repainting the window every few milliseconds. This is, of course, an approach I knew was doomed from the start. Is there any API that could save me?
Any help appreciated.
While I haven't implemented it myself, I believe that what's usually done is the screen is broken into 16x16 pixel blocks. You can keep the previous screenshot, take a new one, compare which blocks have changed and send only the 16x16 blocks that have changes in them.
You can further improve performance by having a change threshold. If fewer than x pixels have changed in a block, don't send yet. Or if the cumulative sum of the changes in a block (the difference between corresponding pixels) is below some threshold, don't send that block.
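As a rough illustration of that block comparison (frame layout, block size and the threshold are assumptions here; the networking side is omitted):

#include <cstdint>
#include <cstdlib>

// Compare one 16x16 block of two RGBA frames and decide whether it has
// changed "enough" to be worth sending. width is the frame width in pixels.
bool blockChanged(const uint8_t *prev, const uint8_t *curr,
                  int width, int blockX, int blockY, long threshold)
{
    long diff = 0;
    for (int y = 0; y < 16; ++y) {
        const int rowStart = ((blockY * 16 + y) * width + blockX * 16) * 4;
        for (int i = 0; i < 16 * 4; ++i)          // 4 bytes per pixel
            diff += std::abs(int(prev[rowStart + i]) - int(curr[rowStart + i]));
        if (diff > threshold) return true;        // early out once over the threshold
    }
    return false;
}

Only the blocks reported as changed need to be encoded and sent for that frame.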
The blocks are also often compressed using a lossy compression scheme that really shrinks down the size you need to send per block. The blocks are often also sent with chroma subsampling (e.g. 4:2:2), meaning the colour-difference (chroma) channels are stored at half the resolution of the brightness (luma) channel. This exploits how the human visual system works, and it explains why things that are pure red or pure blue sometimes get blockiness or fringing around them when screen sharing.
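For a concrete picture of what that subsampling means, here is a sketch for one scanline of RGB pixels (BT.601-style coefficients assumed; a real encoder would typically average each pixel pair rather than dropping one):

#include <cstdint>
#include <vector>

struct YccLine { std::vector<uint8_t> y, cb, cr; };

// Keep full-resolution luma (Y) but only one chroma sample (Cb, Cr) per two pixels.
YccLine subsampleLine422(const uint8_t *rgb, int width)
{
    YccLine out;
    for (int x = 0; x < width; ++x) {
        const float r = rgb[x * 3], g = rgb[x * 3 + 1], b = rgb[x * 3 + 2];
        out.y.push_back(uint8_t(0.299f * r + 0.587f * g + 0.114f * b));
        if (x % 2 == 0) {
            out.cb.push_back(uint8_t(128.0f - 0.169f * r - 0.331f * g + 0.5f * b));
            out.cr.push_back(uint8_t(128.0f + 0.5f * r - 0.419f * g - 0.081f * b));
        }
    }
    return out;   // half as many chroma samples as luma samples
}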
Recently I started working through a book named Tricks of the 3D Game Programming Gurus. It uses DDraw to implement a software rendering engine, but DDraw is too old, so I want to use Direct3D 11 to do the same thing. I got the texture of the main backbuffer and updated it, but it didn't work. What should I do?
You don't have direct access to the true frontbuffer/backbuffer even with DirectDraw on modern platforms.
If you want to do all your rendering into a block of CPU memory without using the GPU, then your best bet for fast presentation is to use a Direct3D 11 texture with D3D11_USAGE_DYNAMIC, and then do a simple full-screen quad render of that texture onto the presentation backbuffer. For that step, you can look at DirectX Tool Kit and the SpriteBatch class.
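As a rough sketch of that approach under Direct3D 11 (device/context creation, the shader resource view and the full-screen quad draw are omitted; cpuPixels is a placeholder for your software-rendered frame):

#include <d3d11.h>
#include <cstdint>
#include <cstring>

// Create a texture the CPU can rewrite every frame and the GPU can sample.
ID3D11Texture2D* CreateCpuWritableTexture(ID3D11Device* device, UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DYNAMIC;             // CPU write, GPU read
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Texture2D* tex = nullptr;
    device->CreateTexture2D(&desc, nullptr, &tex);
    return tex;
}

// Copy a tightly packed RGBA frame into the texture, respecting the row pitch.
void UploadFrame(ID3D11DeviceContext* ctx, ID3D11Texture2D* tex,
                 const uint8_t* cpuPixels, UINT width, UINT height)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(ctx->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        for (UINT y = 0; y < height; ++y)
            std::memcpy(static_cast<uint8_t*>(mapped.pData) + y * mapped.RowPitch,
                        cpuPixels + y * width * 4, width * 4);
        ctx->Unmap(tex, 0);
    }
}

After the upload, SpriteBatch (or your own full-screen quad) can draw the texture onto the swap chain's backbuffer.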
That said, performance wise this is likely to be pretty poor because you are doing everything on the CPU and the GPU is basically doing nothing 99% of the time.
I want to start writing an application that can capture screen content, or capture specific full screen app content, but I am not sure where to start.
Ideally this would be written using OpenGL but I don't know the capabilities for OpenGL to capture application screen content. If I could use OpenGL to capture, let's say World of Warcraft, that would be perfect.
the capabilities for OpenGL to capture application screen content
are nonexistent. OpenGL is an API for getting things onto the screen. There's exactly one function to retrieve pixels back from OpenGL (glReadPixels), and it's only specified to work for things that have been drawn by the very OpenGL context with which that call to glReadPixels is made; even that is highly unreliable for anything but off-screen FBOs, since the operating system is at liberty to clobber, clear or otherwise alter the main window's framebuffer contents at any time.
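For reference, the one case described above looks roughly like this (a sketch assuming a current OpenGL context with a complete off-screen FBO of the given size already bound):

#include <GL/gl.h>
#include <cstdint>
#include <vector>

// Read back the pixels of your *own* framebuffer. This only captures what this
// OpenGL context rendered itself; it cannot capture other applications' windows.
std::vector<uint8_t> readOwnFramebuffer(int width, int height)
{
    std::vector<uint8_t> pixels(size_t(width) * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;                          // rows come back bottom-up
}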
Note that you can find several tutorials scattered around the internet on how to take screenshots with OpenGL. None of them works on modern computer systems, because the undefined behaviour they rely on (all windows on a screen sharing one large contiguous region of the GPU's scanout framebuffer) no longer holds in modern graphics systems (every window owns its own, independent set of framebuffers, and the on-screen image is composited from those).
Capturing screen content is a highly operating-system-dependent task and there's no silver bullet for how to approach it. Some systems provide ready-to-use screen capture APIs; however, depending on the performance requirements, those APIs may not be the best choice. Some capture programs inject a DLL into each and every process to tap into the rendering right at the source of the generated images. And some screen capture systems install a custom kernel driver to get access to the graphics card's video scanout buffer (which is usually mapped into system address space), bypassing the graphics card's driver to copy out the contents.
So my problem is this: I am making a pong game, and the velocity of the ball is calculated using the screen size; on my PC it works fine. When I send the game to a friend, the ball seems to move extremely fast. I think the problem is in the while loop, because I put a counter in it to delay the start of the game; however, on other PCs it seems like the while loop spins so fast that it disregards the counter altogether and starts the game instantaneously. My PC isn't low-end by any means, so I cannot figure out what the problem is.
This is a well-known and well-solved problem. Simple games from the 80s suffer from it: they were built to redraw the screen as fast as the computer would allow, and now (assuming you can get them to run) they run unplayably fast. The speed at which your game runs should not be determined by how fast your computer can execute a while loop, or your game will never play the same on two computers.
Games have solved this problem for decades now by scaling the advancement of the game-state by the frame-rate of the computer currently running the game.
The first thing you need to do in your while loop is calculate the elapsed time since the last iteration of the loop; this will be some tiny fraction of a second. Your game state needs to advance by that much time, and only that much time.
In very simple terms, if you're moving your ball using something like this...
ball_x += ball_horizontal_momentum
ball_y += ball_vertical_momentum
You would need to modify each momentum by a scaling factor determined by how much time has passed:
ball_x += ball_horizontal_momentum * elapsed_time
ball_y += ball_vertical_momentum * elapsed_time
So, on a very slow computer your ball might wind up jumping 100 pixels each frame. On a computer which is 10 times faster, your ball would move 10 pixels each frame. The result is that on both computers, the ball will appear to move the exact same speed.
All of your animations need to be scaled in this way.
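As a rough sketch of what that loop can look like in C++ (using std::chrono to measure the elapsed time; input handling, collisions and drawing are placeholders):

#include <chrono>

int main()
{
    using clock = std::chrono::steady_clock;

    float ball_x = 0.0f, ball_y = 0.0f;
    float ball_horizontal_momentum = 120.0f;   // pixels per second
    float ball_vertical_momentum   = 80.0f;    // pixels per second

    auto previous = clock::now();
    bool running = true;
    while (running)
    {
        const auto now = clock::now();
        // Elapsed time since the last iteration, in seconds.
        const float elapsed_time = std::chrono::duration<float>(now - previous).count();
        previous = now;

        // Advance the game state by exactly that much time.
        ball_x += ball_horizontal_momentum * elapsed_time;
        ball_y += ball_vertical_momentum   * elapsed_time;

        // ... handle input, check collisions, draw the frame,
        //     and set running = false when the player quits ...
    }
    return 0;
}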