My university started teaching a course which includes OpenGL programming. They make us use FreeGLUT to create a window and a context for OpenGL, but I found an online course at lynda.com about OpenGL in which they use GLFW instead of FreeGLUT.
So I want to know which one I should use, and what the differences are between the two.
FreeGLUT:
Based on the GLUT API.
GLUT has been around for about as long as OpenGL itself.
Many tutorials and examples out there use GLUT.
Takes care of implementing the event loop and works through callbacks; see the sketch below. This is good for simple stuff, but it makes things like precisely timed animation loops and low-latency input much harder.
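To make the callback model concrete, here is a minimal FreeGLUT sketch (window title and sizes are arbitrary); once glutMainLoop() is called, GLUT owns the loop:

```cpp
// Minimal FreeGLUT sketch: you register callbacks, then hand control
// to glutMainLoop(), which never returns control to your code.
#include <GL/freeglut.h>

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw here ...
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(640, 480);
    glutCreateWindow("FreeGLUT example");

    glutDisplayFunc(display); // GLUT invokes this whenever a redraw is needed
    glutMainLoop();           // GLUT, not you, owns the event loop from here on
    return 0;
}
```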
GLFW:
Designed from scratch with the experiences of other frameworks in mind.
Gives much finer control over context creation and window attributes.
GLFW 2 provided basic threading support functions (thread creation, synchronization); these were removed in GLFW 3.
GLFW 2 provided basic image file loading support; this was also removed in GLFW 3.
Gives very detailed access to input devices.
The event loop is under the programmer's control, which allows for much more precise timing and lower latency; see the sketch below.
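For contrast, a minimal GLFW 3 sketch of the programmer-owned loop (window title and sizes are arbitrary); note that you decide when events are polled and when a frame is drawn:

```cpp
// Minimal GLFW 3 sketch: the event loop lives in your code, so you control
// exactly when input is polled and when rendering happens.
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return -1;

    GLFWwindow *window = glfwCreateWindow(640, 480, "GLFW example", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT); // ... render here ...
        glfwSwapBuffers(window);
        glfwPollEvents();             // you choose when events are processed
    }

    glfwTerminate();
    return 0;
}
```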
I'm developing an iOS app and have solved the Swift-C++ interoperability on the CPU side by wrapping the C++ classes in Objective-C. But on the GPU side, I don't know whether there is any way to call into, and retrieve data from, a pure C++ class (not MSL). Since MSL is C++-based, my intuition says yes, but I haven't found any information pointing that way.
I’m trying to use libraries like CGAL or FastNoise to update a massive number of particles.
Welcome!
I'm afraid you can't just access arbitrary data from any class in your GPU code. Programming for the GPU requires a very different paradigm: you have to explicitly send data to the GPU, trigger computational tasks (a.k.a. "kernels") to run on it, wait for their completion, and transfer the results back to CPU memory.
I recommend you check out some tutorials on Metal. There is, for example, a sample project where image data is modified on the GPU using compute kernels.
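To make the paradigm concrete, here is a minimal sketch of what a compute kernel looks like in MSL; the kernel and buffer names are hypothetical, and the point is that the GPU code only sees the buffers you explicitly bind, never arbitrary CPU-side C++ objects:

```cpp
// scale_particles.metal -- hypothetical minimal compute kernel (MSL is C++-based).
// The kernel can only touch data the CPU side has bound to its buffer slots;
// it cannot reach into C++ classes living in CPU memory (e.g. CGAL structures).
#include <metal_stdlib>
using namespace metal;

kernel void scale_particles(device float3  *positions [[buffer(0)]],
                            constant float &factor    [[buffer(1)]],
                            uint            id        [[thread_position_in_grid]])
{
    positions[id] *= factor; // one thread per particle
}
```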
I would like to create a 'simple' 2D library with OpenGL. I want to use OpenGL because I know I will learn a lot of things; that's why I don't want to use a higher-level library (like SDL).
I know there are some libraries that make OpenGL a bit easier:
freeglut: I saw a recent release (freeglut 3.2.1 on 29 September 2019), but is it still used?
glfw: it seems more modern, but also seems too high-level
I don't know if these libraries can really be compared, but which is the 'best' one (glfw or freeglut) for learning?
There is also GLEW, but I don't understand what it is. Is it required? I just know it's unrelated to freeglut and glfw.
What is the 'best' library (between glfw/freeglut) for learning?
The best option is to not use any library at all, if you really want to learn all the details.
You will need to learn how to load OpenGL functions on the fly, based on the OpenGL version and extensions you want. You will also learn how to use your windowing system (e.g. Win32, X11, etc.) to create a window suitable for OpenGL rendering.
Typically, most developers avoid some or all of that by using a library that loads OpenGL functions (e.g. GLEW) and/or creates the native window (e.g. SDL, glfw, glut; these typically work across several platforms), but you can do it all yourself if you really want.
A good option is to pick SDL and use it only for window initialization. That leaves you to load the OpenGL functions you use, which is fairly easy. Then, when you need input, you can use the SDL input subsystem too.
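As a sketch of what "loading the OpenGL functions you use" looks like with SDL2 (the typedef name is my own; only SDL_GL_GetProcAddress and the window/context calls are SDL's actual API), assuming you want a modern entry point such as glCreateShader:

```cpp
#include <SDL.h>
#include <SDL_opengl.h>
#include <stdio.h>

// Signature of glCreateShader; the typedef name is arbitrary.
typedef GLuint (APIENTRY *PFN_glCreateShader)(GLenum type);

int main(int argc, char **argv)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("GL", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480,
                                       SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);

    // Ask the driver for the entry point by name at run time.
    PFN_glCreateShader my_glCreateShader =
        (PFN_glCreateShader)SDL_GL_GetProcAddress("glCreateShader");
    printf("glCreateShader %s\n", my_glCreateShader ? "loaded" : "missing");

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```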
I've been fighting with several libraries (Irrlicht, Ogre3D, etc.) and keep landing on either overly complex libraries or overly complex installation guides.
I'd appreciate some pointers to how to achieve what the title suggests.
Thanks
If you're going for simplicity of setup and use, I'd recommend Unreal Engine. It not only allows you to render in both 3D and 2D, but also has a lot of other functionality, including scripting in C++.
If you're looking for a more lightweight solution, though not as easy a one, you can try using OpenGL. It is quite easy to set up: just install Code::Blocks and start with their OpenGL template. Although it is much harder to learn and use, you will learn a great deal from it.
If you don't really need to use C++, you can use Unity. Although its interface is exposed through C#, it's not very hard to learn for a C++ programmer. I also find it easier to use than Unreal Engine.
I would also recommend Vulkan, which would probably be the most performant of them all, but then again, you wanted a 'simple' solution.
I have worked with SDL. I would like to know how I could open a window with an OpenGL context on Windows.
OpenGL on its own does not acknowledge the existence of a window, or any meaningful concept of one. You need a windowing API; OpenGL will then have a rendering context passed to it by that windowing API.
<windows.h> is the Win32 API for Windows, and the prototypical go-to API for creating windows when writing OpenGL applications for the first time on a Windows machine. I don't know what the equivalents for macOS and Linux are (X11, maybe?), but they have their own flavors.
For a wide variety of reasons (namely, the fact that those APIs are old, arcane, and obtuse to work with), there are a lot of APIs that wrap around the native windowing API and are much preferred for beginners. GLFW is one such example, and my personal preference, as it works out of the box for Windows, Mac, and most Linux windowing APIs. SDL is another staple, though I get the sense it has fallen out of favor recently (it seems like it tries to do 'too much' on its own).
I don't know the functionality of <glu.h>, but given that the whole thing is deprecated, I don't advise using it.
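For completeness, here is a minimal sketch of the raw Win32/WGL route described above (class and window names are arbitrary; link against opengl32 and gdi32). Note this creates a legacy context; a modern core-profile context additionally needs wglCreateContextAttribsARB:

```cpp
// Minimal legacy OpenGL context on Windows using raw Win32 + WGL.
#include <windows.h>
#include <GL/gl.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProcA(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow)
{
    WNDCLASSA wc = {0};
    wc.style         = CS_OWNDC;   // window gets its own DC, as GL expects
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = "GLWindow"; // arbitrary class name
    RegisterClassA(&wc);

    HWND hwnd = CreateWindowA("GLWindow", "OpenGL", WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                              NULL, NULL, hInst, NULL);

    // Describe the framebuffer we want; Windows picks the closest pixel format.
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    HDC dc = GetDC(hwnd);
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc); // legacy (pre-3.x) context
    wglMakeCurrent(dc, rc);
    ShowWindow(hwnd, nCmdShow);

    // Simplified message loop; a real app would use PeekMessage to render
    // continuously instead of only when a message arrives.
    MSG msg;
    while (GetMessageA(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
        glClear(GL_COLOR_BUFFER_BIT); // ... render here ...
        SwapBuffers(dc);
    }

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
    return 0;
}
```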
I'm just working my way into the Boost libraries, and there's one question stuck in my head :D
Is multithreading, especially Boost's implementation, still used for game development? I understand the pros of threading, but I'm not sure whether it's obsolete. If not, where is it used in game development?
Yes, you still need it. No, it is not obsolete. On the contrary, multi-threading support is one of the most progressive and integral parts of modern game development. Almost every modern game engine has multi-thread support. Why? Because animation, physics, rendering, and resource loading can all run simultaneously, and even the physics itself can be parallelized; the same applies to paging. As for Boost: Ogre3D uses boost::threads for its multi-threading (if you're an enthusiast, you should be familiar with Ogre3D, right?). Unreal Engine 3 runs a rendering thread and a game-logic thread, separate from the main application thread; moreover, UE 3.5 has Unreal Swarm as a job distribution system and Gemini as an ultra-fast HDR rendering pipeline. So yes, it makes sense.
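As a hedged sketch of the Boost side (the function names are mine, purely illustrative): Boost.Thread lets you push subsystems like physics and resource streaming onto worker threads while the main thread keeps rendering. Since C++11, std::thread offers essentially the same interface.

```cpp
// Illustrative only: two engine subsystems running on their own threads.
#include <boost/thread.hpp>
#include <iostream>

// Hypothetical stand-ins for real engine work.
void update_physics() { std::cout << "physics step\n"; }
void load_resources() { std::cout << "streaming assets\n"; }

int main()
{
    boost::thread physics(update_physics); // spawn worker threads
    boost::thread loader(load_resources);

    // ... the main thread would render here, in parallel ...

    physics.join();                        // wait for both workers to finish
    loader.join();
    return 0;
}
```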