Drawing pixel by pixel in C++ [closed] - c++

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Some years ago, I used to program for MS-DOS in assembly language. One of the things available was to give the BIOS an x coordinate, a y coordinate and a color (expressed as an integer), then call a function, and the BIOS would plot the pixel immediately.
Of course, this was very hard, very time-consuming work, but the trade-off was that you got exactly what you wanted, exactly when you wanted it.
I tried for many years to write to the macOS API, but found it either difficult or impossible, as nothing is documented at all. (What the hell is an NSNumber? Why do all the controls return a useless object?)
I don't really have any specific project in mind right now, but I would like to be able to write C++ that can draw pixels much in the same way. Maybe I'm crazy but I want that kind of control.
Until I can overcome this, I'm limited to writing programs that run in the console by printing text and scrolling up as the screen gets full.

You could try using Windows GDI:
#include <windows.h>

int main()
{
    // Draw directly onto the console window's client area.
    HDC hdc = GetDC(GetConsoleWindow());
    for (int x = 0; x < 256; ++x)
        for (int y = 0; y < 256; ++y)
            SetPixel(hdc, x, y, RGB(127, x, y));
    ReleaseDC(GetConsoleWindow(), hdc);   // release the DC when done
}
It is pretty easy to get something drawn (if this is what you are asking), as you can see from the example above.

Modern x86 operating systems no longer run in real mode, so the old BIOS video services are not directly available.
You have several options:
Run a VM and install a real mode OS (e.g. MS-DOS).
Use a layer that emulates the real mode (e.g. DOSBox).
Use a GUI library (e.g. Qt, GTK, wxWidgets, Win32, X11) and use a canvas or a similar control where you can draw.
Use a 2D API (e.g. the 2D components of SDL, SFML, Allegro).
Use a 3D API (e.g. OpenGL, Direct3D, Vulkan, Metal; possibly exposed by SDL, SFML or Allegro if you want it portable) to stream a texture that you have filled pixel by pixel with the CPU each frame (a minimal sketch of this follows below).
Write fragment shaders (either using a 3D API or, much easier, in a web app using WebGL).
If you want to learn how graphics are really done nowadays, you should go with the last 2 options.
Note that, if you liked drawing "pixel by pixel", you will probably love writing fragment shaders directly on the GPU and all the amazing effects you can achieve with them. See ShaderToy for some examples!
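A minimal sketch of the texture-streaming option above, assuming SDL2 (the 256x256 size and the colour formula are arbitrary placeholders):

#include <SDL.h>
#include <vector>
#include <cstdint>

int main()
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window*   window   = SDL_CreateWindow("Pixels", SDL_WINDOWPOS_CENTERED,
                                              SDL_WINDOWPOS_CENTERED, 256, 256, 0);
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

    // A streaming texture the CPU can rewrite every frame.
    SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                             SDL_TEXTUREACCESS_STREAMING, 256, 256);

    std::vector<uint32_t> pixels(256 * 256);
    bool running = true;
    while (running)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        // "Plot" every pixel in system memory, much like the old BIOS days.
        for (int y = 0; y < 256; ++y)
            for (int x = 0; x < 256; ++x)
                pixels[y * 256 + x] = (0xFFu << 24) | (127u << 16) | (x << 8) | y;

        SDL_UpdateTexture(texture, nullptr, pixels.data(), 256 * sizeof(uint32_t));
        SDL_RenderClear(renderer);
        SDL_RenderCopy(renderer, texture, nullptr, nullptr);
        SDL_RenderPresent(renderer);
    }

    SDL_DestroyTexture(texture);
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
}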

Related

Creating a GUI in OpenGL, is it possible? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I'm trying to create a custom GUI in OpenGL from scratch in C++, but I was wondering whether that is possible or not.
I'm getting started on some code right now, but I'm going to hold off until I get an answer.
YES.
In video games, in general, every UI is implemented with APIs like OpenGL, Direct3D, Metal or Vulkan. Since the game's rendering surface runs at a higher frame rate than the OS UI APIs, mixing the two slows the game down.
Start by making a view class as a base class, then implement the actual UI classes, like button, table and so on, by inheriting from that base class.
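A minimal sketch of that structure (the names View, Button and Rect are purely illustrative, not from any particular library):

#include <functional>
#include <string>

struct Rect { float x, y, w, h; };

class View {
public:
    virtual ~View() = default;
    virtual void Draw() = 0;                          // issue draw calls via your graphics API
    virtual bool OnClick(float /*x*/, float /*y*/) { return false; }
    Rect bounds{};
};

class Button : public View {
public:
    std::string label;
    std::function<void()> onPressed;

    void Draw() override {
        // e.g. draw a textured quad for the background, then the label glyphs
    }
    bool OnClick(float x, float y) override {
        bool hit = x >= bounds.x && x <= bounds.x + bounds.w &&
                   y >= bounds.y && y <= bounds.y + bounds.h;
        if (hit && onPressed) onPressed();            // fire the callback on a hit
        return hit;
    }
};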
Making UIs with a graphics API is similar to making a game in that you use the same graphics techniques: texture compression, mipmaps, MSAA, special effects and so on. However, handling fonts is a huge part on its own; for this reason, many game developers use a game engine or UI libraries.
https://www.twitch.tv/heroseh works on a pure C + OpenGL user interface library daily at about 9 AM (EST).
Here is their github repo for the project:
https://github.com/heroseh/vui
I myself am in the middle of stubbing in a half-assed user interface that is just a list of clickable buttons (www.twitch.com/kanjicoder).
The basic idea I ran with is that both the GPU and CPU need to know about your data. So I store all the required variables for my UI in a texture and then sync that texture with the GPU every time it changes.
On the CPU side it's a uint8 array of bytes. On the GPU side it's an unsigned 32-bit texture.
I have getters and setters on both the GPU (GLSL) and CPU (C99) side that manage packing and unpacking variables into and out of the pixels of the texture.
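As a rough CPU-side illustration of that packing idea (the array size, slot layout and function names are my own placeholders, not taken from either project linked above):

#include <cstdint>

static uint8_t uiState[512 * 512 * 4];   // one RGBA8 texel per 32-bit slot

void setU32(int slot, uint32_t value)
{
    uint8_t* p = &uiState[slot * 4];
    p[0] = value & 0xFF;                  // R
    p[1] = (value >> 8) & 0xFF;           // G
    p[2] = (value >> 16) & 0xFF;          // B
    p[3] = (value >> 24) & 0xFF;          // A
}

uint32_t getU32(int slot)
{
    const uint8_t* p = &uiState[slot * 4];
    return p[0] | (p[1] << 8) | (p[2] << 16) | (uint32_t(p[3]) << 24);
}

// After any change, re-upload the dirty region (e.g. with glTexSubImage2D or
// your API's equivalent) so the GPU-side getters see the same values.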
It's a bit crazy, but I wanted the "lowest common denominator" method of creating a UI so I can easily port it to any graphics library of my choice in the future. For example, eventually I might want to switch from OpenGL to Vulkan. If I keep most of my logic as manipulations of a big 512x512 array of pixels, I shouldn't have too much refactoring work ahead of me.

Confused about the sense of GDI at all [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 4 years ago.
I started learning Windows programming a few years ago, and I have always used only the "native" environment for my needs. I mean, I write code using only WinAPI: no DirectX/DirectDraw/Direct2D/etc. for graphics, no external libraries for music or anything else, just WinAPI. Lately I have been working on the next evolutionary step of my graphics renderer. My previous algorithms were good: they redrew only the parts of the window that needed redrawing, and that worked well for non-fullscreen windows. But when I got to full-screen cases, the FPS was very low.
So, a few months ago, I realized I could use a brand new algorithm: instead of drawing in WM_PAINT and recreating DCs and bitmaps every time, I run a parallel thread with an eternal redraw loop, where the DCs and bitmaps are created only once. I can even avoid GDI/GDI+ functions such as Rectangle and Graphics::FillRect and write my own, faster routines. So I did it. And what I got:
62 FPS at 1920x1080 with no graphical load at all.
Why?
It's just this code:
void render()
{
    COLORREF *matrix;
    matrix = re->GetMatrix();
    while (1)
    {
        Sleep(1000 / 120);
        re->Render();
        // below goes the fps counter, which is read from another thread
        while (!mu.try_lock())
        {
        }
        frames++;
        mu.unlock();
    }
}
The re->Render function:
inline void Card::Render()
{
    //SetDIBits(hdcc, bm, 0, bi.bmiHeader.biHeight, matrix, &bi, DIB_RGB_COLORS);
    //StretchBlt(hdc, 0, 0, width, height, hdcc, 0, 0, width, height, SRCCOPY);
    //the method above, with StretchBlt or plain BitBlt, is awful
    SetDIBitsToDevice(hdc, 0, 0, width, height, 0, 0, 0, bi.bmiHeader.biHeight, matrix, &bi, DIB_RGB_COLORS); //hdc is a window DC, not a memory DC
}
So, if I understand correctly, this is the maximum that can be squeezed out of GDI. If I'm right, the question is: what is the point of GDI at all? Computer games were developed with Direct2D and then DirectX/OpenGL; user interfaces before NT 8 were not windowless and/or used DirectDraw. I'm confused: is it realistic to write a good software renderer without any library, just by yourself?
Part of the problem is here:
Sleep(1000 / 120);
comes out to 8 ms (after integer division). But Sleep is not a very precise timing mechanism. It will sleep for at least the amount of time specified. And, with the default clock tick rate, it will sleep for at least 15.6 ms on most configurations. A frame duration of 15.6 ms is very close to 62 frames per second, so that's probably the root problem.
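For illustration only, one way to reduce that granularity is to request a 1 ms timer resolution with timeBeginPeriod (from winmm.lib). This is a hedged workaround sketch of the timing issue, not a recommendation; a real renderer would rather pace itself against vsync:

#include <windows.h>   // Sleep, timeBeginPeriod / timeEndPeriod
#include <atomic>
#pragma comment(lib, "winmm.lib")

void renderLoop(std::atomic<bool>& running)
{
    timeBeginPeriod(1);            // ask the scheduler for ~1 ms granularity
    while (running)
    {
        Sleep(1000 / 120);         // now sleeps close to the intended 8 ms
        // re->Render();           // the frame would be rendered here
    }
    timeEndPeriod(1);              // always pair with timeBeginPeriod
}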
Beyond that, you will have problems with GDI because the graphics operations are largely performed in system memory which then has to be transferred to graphics memory. At higher resolutions, it can be difficult to do this at a high frame rate, depending on the hardware in use.
I’m not sure I understand your question but here’s my best guess.
I can run a parallel thread
Not a good idea for GDI. WinAPI can go multithreading, but it’s tricky to use correctly, see this article: https://msdn.microsoft.com/en-us/library/ms810439.aspx
62 FPS at 1920x1080 with no graphical load at all. Why?
GDI wasn’t designed for the high-FPS use case you want from that. It was designed to redraw stuff when something changed. It was designed before modern GPUs.
On modern hardware, the way to get high FPS and/or low latency rendering is by using GPU-centric technologies. In C++, that’s Direct3D and Direct2D.
is it realistic to write a good software renderer without any library
Sure it’s possible. Just not the way you’re trying to do it with SetDIBitsToDevice.
On modern hardware and OSes, your API calls (regardless of what the API is) end up as commands sent to the 3D GPU. That's why newer GPU-centric APIs like D3D and D2D often deliver better performance.
If you want to implement a software renderer, that's fine; just keep in mind you have to upload the result to a GPU texture each frame. On modern Windows, WinAPI does just that under the hood.
Legacy Windows (i.e. anything before Vista) didn’t rely on GPU for that. But games didn’t use GDI either, they used DirectX 9, DirectDraw, etc…
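As a rough sketch of that "upload the result to a GPU texture" step, assuming Direct3D 11 (a dynamic texture the CPU rewrites every frame; the device, context and BGRA pixel buffer are assumed to exist elsewhere):

#include <d3d11.h>
#include <cstdint>
#include <cstring>

ID3D11Texture2D* CreateUploadTexture(ID3D11Device* device, UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DYNAMIC;          // CPU-writable every frame
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags   = D3D11_CPU_ACCESS_WRITE;

    ID3D11Texture2D* tex = nullptr;
    device->CreateTexture2D(&desc, nullptr, &tex);
    return tex;
}

void UploadFrame(ID3D11DeviceContext* context, ID3D11Texture2D* tex,
                 const uint32_t* cpuPixels, UINT width, UINT height)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        // Copy row by row: the GPU row pitch may be wider than width * 4.
        for (UINT y = 0; y < height; ++y)
            std::memcpy(static_cast<uint8_t*>(mapped.pData) + y * mapped.RowPitch,
                        cpuPixels + y * width, width * sizeof(uint32_t));
        context->Unmap(tex, 0);
    }
    // Then draw a full-screen quad that samples this texture.
}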

Want to write a screen capture recording app, where do I start? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I want to start writing an application that can capture screen content, or capture specific full screen app content, but I am not sure where to start.
Ideally this would be written using OpenGL but I don't know the capabilities for OpenGL to capture application screen content. If I could use OpenGL to capture, let's say World of Warcraft, that would be perfect.
the capabilities for OpenGL to capture application screen content
are nonexistent. OpenGL is an API for getting things onto the screen. There is exactly one function for retrieving pixels back from OpenGL (glReadPixels), and it is only specified to work for things that were drawn by the very OpenGL context with which that glReadPixels call is made; and even that is highly unreliable for anything but off-screen FBOs, since the operating system is at liberty to clobber, clear or otherwise alter the main window's framebuffer contents at any time.
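For completeness, a minimal sketch of that one reliable case (reading back an off-screen FBO that this same context rendered), assuming a loader such as GLEW is already initialized and the FBO and its size are set up elsewhere:

#include <GL/glew.h>
#include <vector>
#include <cstdint>

std::vector<uint8_t> ReadBackFbo(GLuint fbo, int width, int height)
{
    std::vector<uint8_t> pixels(size_t(width) * height * 4);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);     // read from our own FBO
    glPixelStorei(GL_PACK_ALIGNMENT, 1);             // tightly packed rows
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    return pixels;                                    // rows come back bottom-up
}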
Note that you can find several tutorials scattered around the internet on how to take screenshots with OpenGL. None of them works on modern systems, because the undefined behaviour they rely on (all windows on a screen sharing one large contiguous region of the GPU's scanout framebuffer) no longer holds in modern graphics systems: every window owns its own, independent set of framebuffers, and the on-screen image is composited from those.
Capturing screen content is a highly operating system dependent task and there's no silver bullet on how to approach it. Some systems provide ready to use screen capture APIs; however depending on the performance requirements those screen capture APIs may not be the best choice. Some capture programs inject a DLL into each and every process to tap into the rendering process right at the source of the generated images. And some screen capture systems install a custom kernel driver to get access to the graphics cards video scanout buffer (which is usually mapped into system address space), bypassing the graphics card's driver to copy out the contents.

Drawing graphics with C++ (How does this work?) (And tips for a faster compiler?) [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I've been making random stuff in Gamemaker and Flash for about 4-5 years now, but I've always wanted to write code in C++. (I don't want to use game engines, I am more interested in writing my own.)
My goal is to eventually write an engine in an early 3D style (I want to mimic that PSX look of low poly characters, low-res textures, 'swimming' textures and poly's etc.) just as a throwback to the games I used to play as a kid.
But I want to start out small.
After borrowing some books from the library about the basics and simple codes (which was a fun experience), I wanted to take it a little step further and find out how graphics work with C++. Not 3D graphics, just graphics in general. (Maybe a little sprite) I want to make something simple and get an insight on how this works.
I want to draw graphics in a new window (320x240, no anti-aliasing) and get rid of the console window. I'm a beginner, I don't really understand how this works, but from what I understand, C++ is just a programming language and I'll have to include something else (which I don't know) to draw the graphics.
I'm using Notepad++ and MinGW for compiling my code (though compiling is really slow; I'd love to know a better, faster, free compiler that, like MinGW, works across more than one platform).
I hope someone could help me out.
Thanks,
~A very enthusiastic newbie (and modeller/artist/musician) with big ideas.
MinGW is a Windows port of GCC, which itself works on all UNIX-like systems.
If you want a true cross-platform compiler, you can use LLVM/Clang (free & Open Source (BSD license)) or the Intel compiler (commercial, but faster).
To draw things, you can either: use native APIs (pain), use a graphics API directly (DirectX, OpenGL...) (also pain), or use a library (for 2D and general handling of stuff, SDL is popular, but I like SFML more; for 3D, you might consider GLFW).
Look at all of those, decide which ones you like more (they're all cross-platform) and read their documentation/API/Tutorials.
SFML API example (a direct answer to the question of how to draw a sprite):
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(600, 800), "Example!"); // window information, can be more precise
    sf::Texture texture;                        // a texture for the sprite
    if (!texture.loadFromFile("MySprite.png"))  // load the texture
    {
        return 1;
    }
    sf::Sprite sprite(texture);                 // make the sprite from the texture
    while (window.isOpen())                     // until the window dies
    {
        sf::Event event;                        // event handling
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();                 // let the window close when asked
        }
        window.clear();                         // clear screen
        window.draw(sprite);                    // draw the sprite wherever it is
        window.display();                       // switch buffers
    }
}
Also, a small side note: fully cross-platform development tends to work better on UNIX, because you don't have to go through the massive pain of setting libraries up on Windows (remember how painful ANY library is to get working?). I suggest you also look at CMake (or any of its alternatives), vim (or geany, if vim is too hardcore) and compiling through the command line :)

is there a good tutorial on terrain editor? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 9 years ago.
I am new to 3D game programming and am currently studying DirectX and OpenGL a lot, on Windows.
I want to make a terrain editor, but I cannot find any open tutorial or ideas on the web.
Is there a good tutorial or open source code for learning this?
A simple one is fine; I just wonder how to raise or lower terrain, or put a tree on the map, like in the following video: http://www.youtube.com/watch?v=oaAN4zSkY24
First I want to say that if you haven't chosen between OpenGL and DirectX yet, it would be a good idea to do so. My choice is OpenGL, since it is cross-platform and works on Windows, Linux, Solaris, macOS, smartphones, etc., whereas DirectX only supports Windows machines.
I can't give you a tutorial or open source code, since even a "simple terrain editor" is still a very complex thing. What I can give you is a list of the topics you need to know and read about; once you know these, you will be able to create a terrain editor.
Things you need to be able to do:
VBOs
Shaders
Multi Texturing
Picking / Ray Picking / 3D Picking
VBOs
A VBO, or Vertex Buffer Object, is a way to upload vertex data (positions, normals, texture coordinates, colors, etc.) to the GPU itself. This allows for very fast rendering and is currently the standard way to render. Note that this is OpenGL terminology; DirectX has the equivalent concept of vertex buffers.
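For example, a minimal sketch of creating a VBO for terrain vertex data in modern OpenGL (assuming a function loader such as GLEW or GLAD is already initialized; the vertex layout here is just x, y, z per vertex):

#include <GL/glew.h>
#include <vector>

GLuint CreateTerrainVbo(const std::vector<float>& vertices)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);                    // create the buffer object
    glBindBuffer(GL_ARRAY_BUFFER, vbo);       // make it current
    glBufferData(GL_ARRAY_BUFFER,
                 vertices.size() * sizeof(float),
                 vertices.data(),
                 GL_STATIC_DRAW);             // upload once, draw many times
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}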
Shaders & Multi Texturing
Shaders are small programs that shade/color the vertices and fragments of all primitives. OpenGL uses GLSL where DirectX uses HLSL; the two are very similar.
Multi-texturing is basically where you bind multiple textures and then, in a shader, calculate which texture (or blend of textures) to use for the current vertex/fragment. This is how you achieve the effect you saw in the video.
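As a rough sketch of that setup in OpenGL (the uniform names, texture units and blending rule are placeholders; the fragment-shader side is only indicated in the comment):

#include <GL/glew.h>

void BindTerrainTextures(GLuint program, GLuint grassTex, GLuint rockTex, GLuint blendMapTex)
{
    glUseProgram(program);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, grassTex);
    glUniform1i(glGetUniformLocation(program, "uGrass"), 0);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, rockTex);
    glUniform1i(glGetUniformLocation(program, "uRock"), 1);

    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, blendMapTex);
    glUniform1i(glGetUniformLocation(program, "uBlendMap"), 2);

    // In the fragment shader, something like:
    //   float t = texture(uBlendMap, uv).r;
    //   fragColor = mix(texture(uGrass, uv), texture(uRock, uv), t);
    // decides which texture "wins" for each fragment.
}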
Picking / Ray Picking / 3D Picking
Picking is the process of "shooting" a ray from the camera (3D space) or the mouse (2D screen space); everything the ray hits/collides with is returned to the user. In your case, you would use the mouse (2D screen space) to create a picking ray, and the point on the terrain where the ray hits is the point where you would want to change the terrain.
If you know nothing about picking, then try Googling; I found it can be really hard to find good results for 3D-related things. If you want, you can read a question I posted some time ago here on Stack Overflow (click here to see the post); it covers 3D camera picking and 2D screen-space picking, and I added my final code to the post itself as well.
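As a minimal illustration of building such a picking ray, assuming the glm math library (intersecting the ray with the terrain is a separate step):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Ray { glm::vec3 origin, direction; };

Ray MouseToRay(float mouseX, float mouseY,
               const glm::mat4& view, const glm::mat4& projection,
               const glm::vec4& viewport /* x, y, width, height */)
{
    // Window coordinates have y going up for glm::unProject.
    float winY = viewport.w - mouseY;
    glm::vec3 nearPt = glm::unProject(glm::vec3(mouseX, winY, 0.0f), view, projection, viewport);
    glm::vec3 farPt  = glm::unProject(glm::vec3(mouseX, winY, 1.0f), view, projection, viewport);
    return { nearPt, glm::normalize(farPt - nearPt) };
}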
Extra
If you combine all these things, you will be able to create a terrain editor.
Some of what I've explained may be OpenGL-specific, but DirectX certainly has features that can do the same kinds of things.