Curves representable in OpenGL

I am a beginner in CAD development & want to know some things about OpenGL.
My main objective is to represent conics, cycloids, epicycloids, hypocycloids, involutes, etc.
Can I directly represent them using some trigonometry, or do I need to convert these curves into B-Splines?
Actually I am currently developing the kernel and want to design it so that I can display the above-mentioned curves. (There is no use in supporting these curves in the kernel if I can't graphically represent them!)
I don't know much about OpenGL, so please pardon me if my question is really stupid!
I tried searching over here but could not find anything useful.

OpenGL can directly render Bezier curves and surfaces using evaluators and even NURBS using the GLU API. See the OpenGL Programming Guide for more information. So you could transform those curves and surfaces into this form.
But I highly recommend that you not use these features, as they are deprecated (dropped from the core of newer OpenGL versions) and nowadays likely to be implemented in software rather than in hardware.
Instead you should implement your own evaluation routines for such curves and surfaces, which evaluate the corresponding equations at a specified sampling rate and generate a simple vertex array (and maybe an index array). This way you stay future-ready, as these can be rendered as standard line strips or triangular meshes using VBOs (the only way to render something in modern OpenGL).
And you even stay API agnostic, as a general vertex array can also be rendered using Direct3D or whatever. So this way you don't pollute your CAD kernel with draw calls. All it needs is a function to transform parametric curves and surfaces into arrays of vertices (and maybe indices) and the client/user of the kernel is responsible for drawing these with whatever API he likes.
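
As a minimal sketch of that idea (assuming an OpenGL 3.x context with a shader program and VAO already bound), one could sample an epicycloid into a vertex array and draw it as a line strip from a VBO; the function names below are only illustrative:

```cpp
// Sample an epicycloid into a vertex array and draw it as a line strip.
// Assumes a bound VAO and shader program with position at location 0.
#include <vector>
#include <cmath>
#include <GL/glew.h>

struct Vec2 { float x, y; };

// Evaluate x(t), y(t) of an epicycloid with fixed radius R and rolling radius r.
std::vector<Vec2> tessellateEpicycloid(float R, float r, int samples)
{
    std::vector<Vec2> pts(samples);
    for (int i = 0; i < samples; ++i) {
        float t = 2.0f * 3.14159265f * i / (samples - 1);
        pts[i].x = (R + r) * std::cos(t) - r * std::cos((R + r) / r * t);
        pts[i].y = (R + r) * std::sin(t) - r * std::sin((R + r) / r * t);
    }
    return pts;
}

void drawCurve(const std::vector<Vec2>& pts)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, pts.size() * sizeof(Vec2), pts.data(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);                         // location 0 = position
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_LINE_STRIP, 0, (GLsizei)pts.size());  // render the samples
    glDeleteBuffers(1, &vbo);
}
```

In real code you would of course create and fill the buffer once and reuse it every frame rather than recreating it per draw call.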

If I am not wrong, OpenGL only works with flat polygons. Even so, you can check whether the GLUT libraries have any method to draw the aforementioned figures, or search for a .obj of those figures and scale, rotate and translate them to the desired position.

Related

Way to have a 3D animated/rigged character in OpenGL

If I want a 3D animated/rigged character in an OpenGL game, how would I get it into OpenGL? If I make an animated/rigged character in 3ds Max, is it possible to have that character in OpenGL? It would be helpful if someone could give a proper way or a tutorial link to export an animated model from 3D software to OpenGL.
OpenGL is a very simple 3D framework which only offers the bare bones. OpenGL can display triangles and fill them with color and that's about it. It comes with some helper functions to manipulate point clouds but it's really very basic.
Animation and rigging are way beyond what it can do. What you need is a framework which supports animation and rigging which then uses OpenGL to display the result.
Since you don't specify any requirements, it's hard to say which one would be good for you. Blender is probably a good place to start. Its game engine runs on many platforms and supports OpenGL, animation and rigging.

3D model manipulation for a Desktop Augmented Reality application

I'm working on an Augmented Reality project that uses multiple markers to get positions for 3D models that I'm planning to overlay. (I'm doing this from scratch using OpenCV and I'm not using ARToolkit or any other off the shelf marker detection libraries).
Environment: Visual C++ 2008, Windows 7, Core2Duo 1GB ram, OpenCV 2.3
I want the 3D models to be manipulated by user so it will turn out to a sort of simulation.
For this I'm planning to use OpenGL. What are your suggestions and recommendations? Can the simulation part be done using OpenGL itself, or will I need to use something like OpenSceneGraph/ODE/Unity 3D/Ogre 3D?
This is for an academic project so better if I can produce more self-coded system rather than using off-the-shelf products.
It would seem that OpenGL is quite enough for your needs (drawing a model with a specific colour and size).
If you're new to OpenGL, and you are not going to be using it for your future projects, it might be easier to use the old fixed-function pipeline, which already has the lighting and color system ready and doesn't require you to learn how to write shaders.
For your project, you will need a texture into which you copy the image from the camera using glTexSubImage2D(), and which you then draw as the background (or you can use glDrawPixels() in case you don't require any scaling). After that, you need your model, complete with normals for lighting. Models can be exported from e.g. Blender or 3ds Max to an ASCII format, which is pretty easy to parse. Then you can draw the model.

Colors can be changed using glColor3f() before drawing the model (make sure you don't specify a different color while drawing the model). Positioning of the models is done using matrices. The old OpenGL has some handy and easy-to-use functions for rotating and translating objects. There are also functions for scaling the objects (changing their size), so that is covered pretty easily. All you need is to figure out the camera position relative to the marker (which I believe is implemented in OpenCV).
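
To make that concrete, a rough fixed-function sketch of the camera-background step could look like the following. It assumes the texture was created once with glTexImage2D at the camera resolution and that the frame is a BGR cv::Mat; the function names are just examples:

```cpp
// Update a background texture from an OpenCV camera frame each frame and
// draw it as a fullscreen quad using the fixed-function pipeline.
#include <GL/glew.h>
#include <opencv2/opencv.hpp>

void uploadCameraFrame(GLuint tex, const cv::Mat& frame)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    // OpenCV stores images as BGR, which OpenGL (1.2+) accepts directly.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame.cols, frame.rows,
                    GL_BGR, GL_UNSIGNED_BYTE, frame.data);
}

void drawBackground(GLuint tex)
{
    glDisable(GL_DEPTH_TEST);            // the background must not occlude the model
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);

    glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);          // simple 2D projection for the quad
    glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

    glBegin(GL_QUADS);
    glTexCoord2f(0, 1); glVertex2f(0, 0);  // flip V so the camera image is upright
    glTexCoord2f(1, 1); glVertex2f(1, 0);
    glTexCoord2f(1, 0); glVertex2f(1, 1);
    glTexCoord2f(0, 0); glVertex2f(0, 1);
    glEnd();

    glPopMatrix();
    glMatrixMode(GL_PROJECTION); glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glEnable(GL_DEPTH_TEST);
}
```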
If you were to use the forward-compatible OpenGL, you would need to set up vertex buffer objects to contain model data and write vertex and fragment shaders to shade and display your model. That's kinda more work for which you get extended flexibility. But you can use shaders in the old OpenGL as well, if you decide you need them (eg. for some special effects).
Learning how to use a scene graph or an engine (Ogre) can take some time; I would not recommend it for your task.

Should we use OpenGL for 2D graphics?

If we want to make an application like MS Paint, should we use OpenGL for render graphics?
I want to ask about the performance of traditional GDI vs. OpenGL.
And if there exist better libraries for this purpose, please show me one.
GDI, X11, OpenGL... are rendering APIs, i.e. you usually don't use them for image manipulation (you can do this, but it requires some precautions).
In a drawing application like MS Paint, if it's pixel based, you'll normally manipulate some picture buffer with customary code, or a special image manipulation library, then send the full buffer to the rendering API.
If your data model consists of strokes and individual shapes, i.e. vector graphics, then OpenGL makes a quite good backend. However it may be worth looking into some other API for vector graphics, like OpenVG (which in its current implementations sits on top of OpenGL, but native implementations operating directly on the GPU may come).
In your usage scenario you'll not run into any performance problems on current computers, so don't choose your API based on that criterion. OpenGL is definitely faster than GDI when it comes to texturing, alpha blending, etc. However, depending on the system and GPU, pure GDI may outperform OpenGL for such simple things as drawing an arc or filling a complex self-intersecting polygon with complex winding rules.
There is no good reason not to use OpenGL for this. Except maybe if you have years of experience with GDI but don't know a single thing about OpenGL.
On the other hand, OpenGL may very well be superior in many cases. Compositing layers or adjusting hue/saturation/brightness/contrast in a GLSL shader will be several orders of magnitude faster (in fact, pretty much "instantly") if there is a reasonably new card in the computer. Stroking a freedraw path with a "fuzzy" pen (i.e. blending a sprite with alpha transparency over and over again) will be orders of magnitude faster. On images with somewhat reasonable dimensions, most filter kernels should run close to realtime. Rescaling with bilinear filtering runs in hardware.
Such things won't matter on a 512x512 image, as pretty much everything is instantaneous at such resolutions, but on a typical 4096x3072 (or larger) image from your digital camera, it may be very noticeable, especially if you have 4-6 layers.
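
As a rough sketch of the kind of GLSL adjustment mentioned above, a single fragment shader pass could apply brightness, contrast and saturation to a layer; the uniform names below are illustrative, not from any particular library:

```cpp
// GLSL fragment shader (kept as a C++ string) adjusting brightness,
// contrast and saturation of a layer texture in one GPU pass.
const char* adjustFrag = R"(
#version 330 core
uniform sampler2D layer;
uniform float brightness;   // additive offset, e.g. -0.2 .. 0.2
uniform float contrast;     // scale around mid-gray, e.g. 0.8 .. 1.2
uniform float saturation;   // 0 = grayscale, 1 = unchanged
in vec2 uv;
out vec4 fragColor;
void main()
{
    vec3 c = texture(layer, uv).rgb;
    c += brightness;
    c = (c - 0.5) * contrast + 0.5;
    float luma = dot(c, vec3(0.2126, 0.7152, 0.0722));
    c = mix(vec3(luma), c, saturation);
    fragColor = vec4(clamp(c, 0.0, 1.0), 1.0);
}
)";
```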

How to do ray tracing in modern OpenGL?

So I'm at a point where I should begin lighting my flatly colored models. The test application is a test case for the implementation of only the latest methods, so I realized that ideally it should be implementing ray tracing (since theoretically, it might be ideal for real-time graphics in a few years).
But where do I start?
Assume I have never done lighting in old OpenGL, so I would be going directly to non-deprecated methods.
The application has currently properly set up vertex buffer objects, vertex, normal and color input and it correctly draws and transforms models in space, in a flat color.
Is there a source of information that would take one from flat colored vertices to all that is needed for a proper end result via GLSL? Ideally with any other additional lighting methods that might be required to complement it.
I would not advise trying actual ray tracing in OpenGL, because you need a lot of hacks and tricks for that and, if you ask me, there is no point in doing this anymore at all.
If you want to do ray tracing on GPU, you should go with any GPGPU language, such as CUDA or OpenCL because it makes things a lot easier (but still, far from trivial).
To illustrate the problem a bit further:
For raytracing, you need to trace the secondary rays and test for intersection with the geometry. Therefore, you need access to the geometry in some clever way inside your shader; however, inside a fragment shader you cannot access the geometry unless you store it "encoded" in some texture. The vertex shader also does not provide you with this geometry information natively, and geometry shaders only know the neighbors, so here the trouble already starts.
Next, you need acceleration data-structures to get any reasonable frame-rates. However, traversing e.g. a Kd-Tree inside a shader is quite difficult and if I recall correctly, there are several papers solely on this problem.
If you really want to go this route, though, there are a lot of papers on this topic; it should not be too hard to find them.
A ray tracer requires extremely well-designed access patterns and caching to reach good performance. However, you have very little control over these inside GLSL, and optimizing the performance can get really tough.
Another point to note is that, at least to my knowledge, real time ray tracing on GPUs is mostly limited to static scenes because e.g. kd-trees only work (well) for static scenes. If you want to have dynamic scenes, you need other data-structures (e.g. BVHs, iirc?) but you constantly need to maintain those. If I haven't missed anything, there is still a lot of research currently going on just on this issue.
You may be confusing some things.
OpenGL is a rasterizer. Forcing it to do raytracing is possible, but difficult. This is why raytracing is not "ideal for real time graphics in a few years". In a few years, only hybrid systems will be viable.
So, you have three possibilities.
Pure raytracing. Render only a fullscreen quad, and in your fragment shader, read your scene description packed in a buffer (like a texture), traverse the hierarchy, and compute ray–triangle intersections (a minimal intersection sketch follows this list).
Hybrid raytracing. Rasterize your scene the normal way, and use raytracing in your shader on some parts of the scene that really require it (refraction, ... but it can be simulated in rasterisation).
Pure rasterization. The fragment shader does its normal job.
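
For illustration, a minimal ray/triangle test of the kind such a fragment shader would run per ray (Möller–Trumbore) might look like this. It is GLSL kept in a C++ string to be appended to the shader source; the function name is just an example:

```cpp
const char* intersectGlsl = R"(
// Returns t > 0 if the ray (orig, dir) hits triangle (v0, v1, v2), else -1.
float intersectTriangle(vec3 orig, vec3 dir, vec3 v0, vec3 v1, vec3 v2)
{
    vec3 e1 = v1 - v0;
    vec3 e2 = v2 - v0;
    vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (abs(det) < 1e-8) return -1.0;          // ray parallel to the triangle
    float inv = 1.0 / det;
    vec3 s = orig - v0;
    float u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return -1.0;       // outside barycentric range
    vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return -1.0;
    float t = dot(e2, q) * inv;                // distance along the ray
    return t > 0.0 ? t : -1.0;
}
)";
```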
What exactly do you want to achieve? I can improve the answer depending on your needs.
Anyway, this SO question is highly related. Even if this particular implementation has a bug, it's definitely the way to go. Another possibility is OpenCL, but the concept is the same.
As of 2019, ray tracing is an option for real-time rendering, but it requires high-end GPUs that most users don't have.
Some of these GPUs are designed specifically for ray tracing.
OpenGL currently does not support hardware accelerated ray tracing.
DirectX 12 on Windows does have support for it. It is recommended to wait a few more years before creating a ray-tracing-only renderer, although it is possible using DirectX 12 with current desktop and laptop hardware. Support on mobile may take a while.
OpenGL (GLSL) can be used for ray (path) tracing. However, there are a few better options: NVIDIA OptiX (CUDA toolkit, cross-platform), DirectX 12 (with the DXR ray tracing extension, Windows only), Vulkan (with the NVIDIA ray tracing extension VKR, cross-platform and widely used), Metal (macOS only), Falcor (a framework built on DXR, VKR and OptiX), and Intel Embree (CPU ray tracing only).
I found some of the other answers to be verbose and wordy. For visual examples that YES, functional ray tracers absolutely CAN be built using the OpenGL API, I highly recommend checking out some of the projects people are making on https://www.shadertoy.com/ (Warning: lag)
To answer the topic: OpenGL has no RTX extension, but Vulkan has, and interop is possible. Example here: https://github.com/nvpro-samples/gl_vk_raytrace_interop
As for the actual question: to light the triangles, there are tons of techniques; look up "forward", "forward+" or "deferred" renderers. The technique to be used depends on your goal. The simplest and best-looking approach these days is image-based lighting (IBL) with physically based shading (PBS). Basically, you use a cubemap and blur it more or less depending on the glossiness of the object. For a simple object viewer you don't need more.
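
As a rough illustration of that cubemap idea (not part of the original answer), a pre-filtered environment map can be sampled at a mip level chosen from the surface roughness; the uniform names and mip count below are assumptions:

```cpp
const char* iblFrag = R"(
#version 330 core
uniform samplerCube envMap;   // pre-filtered environment map, blurrier per mip
uniform float roughness;      // 0 = mirror, 1 = fully diffuse
const float MAX_MIP = 5.0;    // number of pre-filtered mip levels minus one
in vec3 reflectedDir;         // reflection vector computed in the vertex shader
out vec4 fragColor;
void main()
{
    // Rougher surfaces pick a blurrier mip level of the environment map.
    vec3 specular = textureLod(envMap, reflectedDir, roughness * MAX_MIP).rgb;
    fragColor = vec4(specular, 1.0);
}
)";
```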

C++ D3DX Font and transformations (d3d9 and d3d10 solutions needed)

I want to render font in a way that takes account of the current transforms and similar settings, especially the projection transform and viewport.
I'm thinking that the best way to do that is to have an off screen surface to render the text to, and then render that surface where I really want the text.
However I'm not certain on a number of aspects of this solution.
Is this the best way to go about it at all?
Are there far better free font renderers around that I'd be better off spending my time with, which allow such things? I see a lot of people complaining about the D3DX font interfaces for various reasons, but never a link to a better Unicode-capable renderer...?
Is there any advantage to using certain surface formats and/or surface sizes (e.g. always using the smallest possible rather than some standard large one, which requires the extra step of trying to work the size out...)?
Yeah, rendering to a texture and then drawing a textured quad to orient and position the text is going to be the easiest way to realize this functionality.
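
If you go the D3D9 route, a rough sketch of that render-to-texture step might look like the following. It assumes you already have a device and an ID3DXFont, that the caller is inside BeginScene/EndScene, and it omits error handling; the names are illustrative, not a definitive implementation:

```cpp
// Draw text into an offscreen render target once; the caller then draws a
// textured quad with this texture using the normal world/view/projection transforms.
#include <d3d9.h>
#include <d3dx9.h>

IDirect3DTexture9* renderTextToTexture(IDirect3DDevice9* device,
                                        ID3DXFont* font,
                                        const wchar_t* text,
                                        UINT width, UINT height)
{
    IDirect3DTexture9* tex = nullptr;
    device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, nullptr);

    IDirect3DSurface9* target = nullptr;
    IDirect3DSurface9* oldTarget = nullptr;
    tex->GetSurfaceLevel(0, &target);
    device->GetRenderTarget(0, &oldTarget);

    device->SetRenderTarget(0, target);
    device->Clear(0, nullptr, D3DCLEAR_TARGET, D3DCOLOR_ARGB(0, 0, 0, 0), 1.0f, 0);
    RECT rc = { 0, 0, (LONG)width, (LONG)height };
    font->DrawTextW(nullptr, text, -1, &rc, DT_LEFT | DT_TOP,
                    D3DCOLOR_ARGB(255, 255, 255, 255));

    device->SetRenderTarget(0, oldTarget);   // restore the back buffer
    oldTarget->Release();
    target->Release();
    return tex;                              // caller owns and releases the texture
}
```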
As for D3DX text renderers, it really depends on which SDK you are using. DirectWrite (only for Windows 7 and Vista) will provide a higher quality text rendering approach for applications that need high quality text rendering in a manner that is interoperable with Direct3D.
You can of course do your own rasterization. There are font rasterization engines out there that are open source that could be repurposed for this need, but we're talking tons of coding here for a benefit that may not be distinguishable enough to warrant the development expense.
Having said that, there's a completely new alternative available to you with Direct3D and shaders, provided that you have access to the glyph outlines as curve data. The idea is to use the shader to rasterize the text and store the curve definitions in the vertex stream and associated textures. Try looking at this paper, which describes the technique.