OpenGL equivalent of GDI's HatchBrush or PatternBrush?

I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL.
I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush.
More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), have it automatically tiled with that pattern instead of drawn solid, with no translation, rotation, skewing, or stretching. The "on" bits of the pattern should show up in the specified color, and the "off" bits should leave whatever had been drawn under the area untouched.
Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"?
I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern, based upon the vertices of the polygon? Whereas I want the pattern tiled.
Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn?
Thanks in advance for any help.

If I understood correctly, you're looking for glPolygonStipple() or glLineStipple().
glPolygonStipple() is quite limited, as it only allows a 32x32 pattern, but it should work like a PatternBrush. I have no idea how to call it from VB, though.
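In C it looks something like the following minimal sketch (untested); the 32x32 pattern is passed as 128 bytes, one bit per pixel, and is anchored to window coordinates, so it tiles rather than stretching with the polygon:

    #include <GL/gl.h>

    GLubyte halftone[128]; /* fill with your 32x32 monochrome pattern */

    void draw_hatched_rect(float x0, float y0, float x1, float y1)
    {
        glEnable(GL_POLYGON_STIPPLE);
        glPolygonStipple(halftone);
        glColor3f(1.0f, 0.0f, 0.0f);   /* the "on" bits take this color */
        glBegin(GL_QUADS);             /* "off" bits leave the background */
        glVertex2f(x0, y0);
        glVertex2f(x1, y0);
        glVertex2f(x1, y1);
        glVertex2f(x0, y1);
        glEnd();
        glDisable(GL_POLYGON_STIPPLE);
    }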

First of all, are you sure it's the drawing operation itself that is the bottleneck here? Visual Basic is known for being very slow (especially if your program is compiled to intermediate VM code, which is the default IIRC; be sure to check the option to compile to native code!), and if your code is the bottleneck, then OpenGL won't help you much; you'll need to rewrite your code in some other language, probably C or C++, though any .NET language should also do.
OpenGL contains functions that allow you to draw stippled lines and polygons, but you shouldn't use them. They have been deprecated for a long time and were removed from OpenGL in version 3.1 of the spec. And that's for a reason: these functions don't map well to the modern rendering paradigm and are not supported by modern graphics hardware, meaning you will most likely get a slow software fallback if you use them.
The way to go is to use a small texture as a mask, and tile it over the drawn polygons. The texture will get stretched or compressed to match the texture coordinates you specify with the vertices. You have to set the wrapping mode to GL_REPEAT for both texture coordinates, and calculate the right coordinates for each vertex so that the texture appears at its original size, repeated the right number of times.
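Here's a minimal sketch (plain C, untested) of that approach, assuming a hypothetical 8x8 RGBA mask texture: deriving the texture coordinates from window coordinates keeps the pattern anchored to the screen, and an alpha test lets the "off" texels show the background, as the question asked:

    #include <GL/gl.h>

    /* One-time setup: an 8x8 RGBA mask, white where the pattern is "on",
       transparent where it is "off". 8 is a hypothetical pattern size. */
    GLuint make_pattern_texture(const GLubyte *rgba8x8)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 8, 8, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba8x8);
        return tex;
    }

    /* Texture coordinates are window coordinates divided by the pattern size,
       so the pattern tiles in place instead of stretching with the quad. The
       default GL_MODULATE environment tints the white texels with the current
       color; the alpha test leaves "off" texels showing the background. */
    void draw_patterned_rect(float x0, float y0, float x1, float y1)
    {
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.5f);
        glColor3f(0.0f, 0.5f, 1.0f);   /* the pattern color */
        glBegin(GL_QUADS);
        glTexCoord2f(x0 / 8, y0 / 8); glVertex2f(x0, y0);
        glTexCoord2f(x1 / 8, y0 / 8); glVertex2f(x1, y0);
        glTexCoord2f(x1 / 8, y1 / 8); glVertex2f(x1, y1);
        glTexCoord2f(x0 / 8, y1 / 8); glVertex2f(x0, y1);
        glEnd();
        glDisable(GL_ALPHA_TEST);
        glDisable(GL_TEXTURE_2D);
    }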
You could also use the stencil buffer as you described, but... how would you fill that buffer with the pattern, and do it fast? You would need a texture anyway. Remember that you need to clear the stencil buffer every frame, before you start drawing. Not doing so could cost you a massive performance hit (the exact value of "massive" depending on the graphics hardware and driver version).
It's also possible to achieve the desired effect using a fragment shader, but learning shaders for that would be overkill for an OpenGL beginner like yourself :-).

Ah, I think I've found it! I can make a stencil across the entire viewport in the shape of the pattern I want (or its mask, I guess), and then enable that stencil when I want to draw with that pattern.

You could just use a texture. Put the pattern in as an image, turn on texture repeating, and you are good to go.

Figured this out a year or two ago.

Related

How to best render to a window, when pixels could be written directly to display buffer?

I have used OpenGL pretty much exclusively for all my rendering, to the point that I'm unaware of any other way to write pixels to a window, unfortunately.
This is a problem because my current project is a work tool that emulates an LCD display (pixel perfect, 2D, very few pixels are touched each frame, all 'drawing' can be done with memcpy() to a pixel buffer) and I feel that OpenGL might be too heavy for this, but I could absolutely be wrong in that assumption.
My goal is to borrow as little CPU time as possible. What's the best way to draw pixels to a window, in this limited way, on a typical modern machine running Windows 10 circa 2019? Is OpenGL suited to this type of rendering, or should I adopt another rendering method? And if so, what would that method be?
edit:
I should also mention that OpenGL is already available to me. If rendering textured triangles with an optimal setup is the fastest method, then I can already do that. Anything that just acts as an API over OpenGL or DirectX will likely be worse in my case.
edit2:
After some research, and thanks to the comments, I think I may just use OpenGL with Pixel Buffer Objects to optimize pixel uploads and keep rendering inexpensive.
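For reference, a minimal sketch (plain C with GLEW, untested) of the PBO streaming path; W and H are placeholder dimensions. The idea is to orphan the buffer each frame, memcpy the emulated LCD's pixels into the mapped pointer, then let glTexSubImage2D source from the PBO:

    #include <GL/glew.h>
    #include <string.h>

    enum { W = 320, H = 240 };          /* placeholder LCD dimensions */

    GLuint pbo, tex;

    void init(void)
    {
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * 4, NULL, GL_STREAM_DRAW);

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    }

    void upload_frame(const unsigned char *pixels)
    {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        /* orphan the old storage so we never stall waiting on the GPU */
        glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * 4, NULL, GL_STREAM_DRAW);
        void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
        if (dst) {
            memcpy(dst, pixels, W * H * 4);   /* the emulator's 'drawing' */
            glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        }
        glBindTexture(GL_TEXTURE_2D, tex);
        /* with a PBO bound, the data argument is an offset, not a pointer */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                        GL_BGRA, GL_UNSIGNED_BYTE, (const void *)0);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
        /* then draw one full-screen textured quad as usual */
    }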

Increase Performance in OpenGL

I have a question about picking. I found a really neat way to do picking: render a single frame with lighting and everything turned off and a unique colour for each object, then simply find the colour of the pixel under the mouse pointer. It works fine, and from what I understand it has also been used by others for object picking in the past. I thought rendering a single frame that way would not be noticeable, however it is noticeable. I should say I am using the JOGL bindings.
So I was wondering, what factors affect the performance in this case? I read in Apple's dev guide NOT to use glBegin and glEnd, and to use buffers instead. Is that right? Am I right to think that it should be possible to render a single frame without anyone noticing the flicker?
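As a point of reference, here is a minimal sketch (plain C rather than JOGL, untested) of the colour-coded picking pass described above; because the frame is drawn to the back buffer and never swapped to the screen, the user should see no flicker. draw_object() and object_count are hypothetical names, and the clear color is assumed to be black:

    #include <GL/gl.h>

    void draw_object(int id);   /* hypothetical: draws one object's geometry */

    int pick(int mouse_x, int mouse_y, int window_h, int object_count)
    {
        unsigned char rgb[3];
        glDisable(GL_LIGHTING);
        glDisable(GL_TEXTURE_2D);
        glDisable(GL_DITHER);   /* dithering would corrupt the id colours */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        for (int id = 0; id < object_count; ++id) {
            /* ids are encoded +1 so the black background reads as "no hit" */
            int key = id + 1;
            glColor3ub(key & 0xFF, (key >> 8) & 0xFF, (key >> 16) & 0xFF);
            draw_object(id);
        }
        glReadBuffer(GL_BACK);  /* read the never-presented back buffer */
        /* window y runs top-down, GL runs bottom-up */
        glReadPixels(mouse_x, window_h - mouse_y - 1, 1, 1,
                     GL_RGB, GL_UNSIGNED_BYTE, rgb);
        return (rgb[0] | (rgb[1] << 8) | (rgb[2] << 16)) - 1; /* -1 = no hit */
    }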

OpenGL Picking from a large set

I'm trying to pick from a large set of rendered quads (several thousand) in JOGL. Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen this post OpenGL GL_SELECT or manual collision detection? and have found it helpful. However it can take my program up to several minutes to complete a rendering of the full set, so I don't think drawing 2x (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I made the switch to batch rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking feasible?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth me using it?
I've investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what's a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads can't be defined easily. I don't like color picking or similar techniques very much, because they seem too impractical for most situations. I never understood why so many tutorials aimed at people new to OpenGL, or even to programming, focus on picking techniques that are useless for nearly everything. For example: try to get the pixel you clicked on in a heightmap: not possible. Try to locate the exact mesh you clicked on in a model: impractical.
If you have a large number of quads, you will probably need a good spatial partitioning scheme or at least (better: also) a scene graph. OK, you don't strictly need this, but it helps A LOT. Look at some scene graph tutorials for further information; it's a good thing to know when you start with 3D programming, because you pick up a lot of concepts and not just OpenGL code.
So what to do now to start with some picking? Take the inverse of your modelview matrix (IIRC, gluUnProject() does this for you) at the position of your mouse cursor. With the orientation of your camera you can now cast a ray into your spatial structure (or into the scene graph that holds the spatial structure) and check for collisions with your quads. I currently have no link, but if you search for 'inverse modelview matrix' you should find pages that explain this better and in more detail than would be practical here.
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads. With some heuristics based on the exact layout of your application (your question is too generic to be more specific), you can improve on this a lot in most cases.
An easy spatial structure to start with is, for example, a quadtree. However, you should get the raycasting working first to fully understand the technique.
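To make the first step concrete, here is a minimal sketch (plain C with GLU, untested; JOGL exposes the same calls) that turns a mouse click into a world-space ray with gluUnProject, ready to be tested against a quadtree or scene graph:

    #include <GL/glu.h>

    void mouse_ray(int mx, int my, double origin[3], double dir[3])
    {
        GLdouble model[16], proj[16];
        GLint view[4];
        GLdouble nx, ny, nz, fx, fy, fz;

        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        /* window y runs top-down; GL expects bottom-up */
        GLdouble wy = view[3] - my - 1;

        /* unproject the click at the near and far clip planes */
        gluUnProject(mx, wy, 0.0, model, proj, view, &nx, &ny, &nz);
        gluUnProject(mx, wy, 1.0, model, proj, view, &fx, &fy, &fz);

        origin[0] = nx; origin[1] = ny; origin[2] = nz;
        dir[0] = fx - nx; dir[1] = fy - ny; dir[2] = fz - nz;
        /* normalize dir if your intersection code expects a unit vector */
    }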
I've never faced this problem myself, but in my opinion CPU-based picking is the best approach to try.
If you have a large set of quads, maybe you can group the quads spatially to avoid testing all of them. For example, you could group the quads into two boxes and first test which box the click falls in, so you only have to test the quads inside that box.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it can be bad for the asynchronous behaviour between GL and the CPU, since the readback forces a synchronization).
Another possibility seems to be using transform feedback together with a geometry shader that does the equivalent of a scissor test. The geometry shader can discard all faces that do not contain the mouse position, so the transform feedback buffer then contains exactly the information about the hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach also works nicely with instancing (additionally write the instance id to the buffer).
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
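For what it's worth, a rough sketch (untested) of what such a geometry shader might look like, embedded as a C string; the screen-space bounding-box check is a cheap stand-in for an exact point-in-triangle test, and names like u_mouseNDC and v_id are made up for illustration:

    /* emits a triangle only if the mouse (in normalized device coordinates)
       falls inside its screen-space bounding box; the emitted id and depth
       are captured by transform feedback */
    const char *pick_gs =
        "#version 150\n"
        "layout(triangles) in;\n"
        "layout(triangle_strip, max_vertices = 3) out;\n"
        "uniform vec2 u_mouseNDC;\n"
        "flat in  int v_id[];   /* object id from the vertex shader */\n"
        "flat out int g_id;     /* captured into the feedback buffer */\n"
        "out float g_depth;     /* also captured, to find the topmost hit */\n"
        "void main() {\n"
        "    vec2 lo = vec2(1.0), hi = vec2(-1.0);\n"
        "    for (int i = 0; i < 3; ++i) {\n"
        "        vec2 p = gl_in[i].gl_Position.xy / gl_in[i].gl_Position.w;\n"
        "        lo = min(lo, p);\n"
        "        hi = max(hi, p);\n"
        "    }\n"
        "    if (any(lessThan(u_mouseNDC, lo)) ||\n"
        "        any(greaterThan(u_mouseNDC, hi)))\n"
        "        return;        /* mouse outside: emit nothing */\n"
        "    for (int i = 0; i < 3; ++i) {\n"
        "        g_id    = v_id[i];\n"
        "        g_depth = gl_in[i].gl_Position.z / gl_in[i].gl_Position.w;\n"
        "        gl_Position = gl_in[i].gl_Position;\n"
        "        EmitVertex();\n"
        "    }\n"
        "    EndPrimitive();\n"
        "}\n";
    /* register the captured outputs before linking the program, e.g.:
       const char *vars[] = { "g_id", "g_depth" };
       glTransformFeedbackVaryings(prog, 2, vars, GL_INTERLEAVED_ATTRIBS);
       then draw with GL_RASTERIZER_DISCARD enabled between
       glBeginTransformFeedback(GL_TRIANGLES) / glEndTransformFeedback() */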
I'm using a solution that I borrowed from the DirectX SDK; there's a nice example there of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nicely with OpenGL.

How to make OpenGL mipmaps sharper?

I am rewriting an OpenGL-based GIS/mapping program. Among other things, the program allows you to load raster images of nautical charts, fix them to lon/lat coordinates, and zoom and pan around on them.
The previous version of the program uses a custom tiling system, where in essence it manually creates mipmaps of the original image, in the form of 256x256-pixel tiles at various power-of-two zoom levels. A tile for zoom level n - 1 is constructed from four tiles from zoom level n, using a simple average-of-four-points algorithm. So it turns off OpenGL mipmapping, and when it comes time to draw some part of the chart at some zoom level, it uses the tiles from the nearest-match zoom level (i.e., the tiles are at power-of-two zoom levels but the program allows arbitrary zoom levels) and then scales the tiles to match the actual zoom level. And of course it has to manage a cache of all these tiles at various levels.
It seemed to me that this tiling system was overly complex, and that I should be able to let the graphics hardware do all of this mipmapping work for me. So in the new program, when I read in an image, I chop it into textures of 1024x1024 pixels each. Then I fix each texture to its lon/lat coordinates, and I let OpenGL handle the rest as I zoom and pan around.
It works, but the problem is: my results are a bit blurrier than the original program's, which matters for this application because you want to be able to read text on the charts as early as possible, zoom-wise. So it seems like the simple average-of-four-points algorithm the original program uses gives sharper results than OpenGL plus my GPU.
I know there are several glTexParameter settings to control some aspects of how mipmaps work. I've tried various combinations of GL_TEXTURE_MAX_LEVEL (anywhere from 0 to 10) with various settings for GL_TEXTURE_MIN_FILTER. When I set GL_TEXTURE_MAX_LEVEL to 0 (no mipmaps), I certainly get "sharp" results, but they are too sharp, in the sense that pixels just get dropped here and there, so the numbers are unreadable at intermediate zooms. When I set GL_TEXTURE_MAX_LEVEL to a higher value, the image looks quite good when you are zoomed far out (e.g., when the whole chart fits on the screen), but as you zoom in to intermediate zooms, you notice the blurriness, especially when looking at text on the charts. (I.e., if it weren't for the text you might think "wow, OpenGL is doing a nice job of smoothly scaling my image", but with the text you think "why is this chart out of focus?")
My understanding is that basically you tell OpenGL to generate mipmaps, and then as you zoom in it picks the appropriate mipmaps to use; there are some limited options for interpolating between the two closest mipmap levels, and for either using the closest pixels or averaging the nearby pixels. However, as I say, none of these combinations seems to give quite as clear a result, at the same zoom level on the chart (i.e., a zoom level where text is small but not minuscule, like the equivalent of "7 point" or "8 point" size), as the previous tile-based version.
My conclusion is that the mipmaps that OpenGL creates are simply blurrier than the ones the previous program created with the average-of-four-points algorithm, and no amount of choosing the right mipmap level or GL_LINEAR vs. GL_NEAREST is going to get the sharpness I need.
Specific questions:
(1) Does it seem right that OpenGL is in fact making blurrier mipmaps than the average-of-four-points algorithm from the original program?
(2) Is there something I might have overlooked in my use of glTexParameter that could give sharper results using the mipmaps OpenGL is making?
(3) Is there some way I could get OpenGL to make sharper mipmaps in the first place, such as by using a "cubic" filter or otherwise controlling the mipmap creation process? Or, for that matter, it seems like I could use the same average-of-four-points code to manually generate the mipmaps and hand them off to OpenGL. But I don't know how to do that...
(1) It seems unlikely; I'd expect OpenGL just to use a box filter, which is average-of-four-points in effect. Possibly it's just switching from one mip level to a higher-resolution one at a different moment. For example, it "chooses the mipmap that most closely matches the size of the pixel being textured", so a 256x256 map will be used to texture a 383x383 area, whereas the manual system it replaces may always have scaled down from 512x512 until the target size was 256x256 or less.
(2) Not that I'm aware of in base GL, but if you were to switch to GLSL and the programmable pipeline, you could use the 'bias' parameter to texture2D if the problem is that a lower-resolution map is being used when you don't want it to be. Similarly, the GL_EXT_texture_lod_bias extension can do the same in the fixed pipeline. It's an NVIDIA extension from a decade ago, and it's something all programmable cards can do, so it's reasonably likely you'll have it.
(EDIT: reading the extension more thoroughly, texture LOD bias migrated into the core spec in OpenGL 1.4; clearly my man pages are very out of date. Checking the 1.4 spec, page 279, you can supply a GL_TEXTURE_LOD_BIAS.)
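In other words, something like this minimal sketch (untested; requires GL 1.4 headers, and -0.5f is an arbitrary starting value to tune by eye):

    #include <GL/gl.h>

    /* a negative bias makes OpenGL pick a higher-resolution mip level
       sooner, which reads as "sharper" at intermediate zooms */
    void set_sharper_lod_bias(void)
    {
        glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, -0.5f);
    }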
(3) Yes: if you disable GL_GENERATE_MIPMAP, then you can use glTexImage2D to supply whatever image you like for every level of scale; that is what the 'level' parameter dictates. So you can supply completely unrelated mipmaps if you want.
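Regarding "I don't know how to do that": here is a minimal sketch (plain C, untested, assuming a square power-of-two RGBA image) that generates each level with the same average-of-four-points box filter and hands every level to OpenGL via the 'level' parameter:

    #include <GL/gl.h>
    #include <stdlib.h>

    /* halve an RGBA image, averaging each 2x2 block of texels */
    static void box_downsample(const unsigned char *src, int w, int h,
                               unsigned char *dst)
    {
        for (int y = 0; y < h / 2; ++y)
            for (int x = 0; x < w / 2; ++x)
                for (int c = 0; c < 4; ++c) {
                    int a = src[((2*y    ) * w + 2*x    ) * 4 + c];
                    int b = src[((2*y    ) * w + 2*x + 1) * 4 + c];
                    int d = src[((2*y + 1) * w + 2*x    ) * 4 + c];
                    int e = src[((2*y + 1) * w + 2*x + 1) * 4 + c];
                    dst[(y * (w / 2) + x) * 4 + c] =
                        (unsigned char)((a + b + d + e + 2) / 4);
                }
    }

    /* upload level 0 plus every hand-built level down to 1x1 */
    void upload_with_manual_mipmaps(unsigned char *level0, int size)
    {
        int level = 0, w = size;
        unsigned char *cur = level0;
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA, w, w, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, cur);
        while (w > 1) {
            unsigned char *next = malloc((size_t)(w / 2) * (w / 2) * 4);
            box_downsample(cur, w, w, next);
            glTexImage2D(GL_TEXTURE_2D, ++level, GL_RGBA, w / 2, w / 2, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, next);
            if (cur != level0) free(cur);
            cur = next;
            w /= 2;
        }
        if (cur != level0) free(cur);
    }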
To answer your specific points, the four-point filtering you mention is equivalent to box-filtering. This is less blurry than higher-order filters, but can result in aliasing patterns. One of the best filters is the Lanczos filter. I suggest you calculate all of your mipmap levels from the base texture using a Lanczos filter and crank up the anisotropic filtering settings on your graphics card.
I assume that the original code managed textures itself because it was designed to view data sets that are too large to fit into graphics memory. This was probably a bigger problem in the past, but is still a concern.

BlitzMax - generating 2D neon glowing line effect to png file

I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or a laser beam. It doesn't have to be realtime; rendering to TImage objects and then maybe saving to PNG for later use in animation is fine. I'm happy to use 3D features, but it will be for use in a 2D game.
Since it will be on a black/space background, my strategy is to draw a series of blurred lines with colour and high transparency, then finish with central lines that are less blurred and more white. What I actually want to draw is Bezier curves, though. Drawing the curves themselves is easy enough, but I can't use the technique above to create a good laser/neon effect, because it comes out looking very segmented. So I think it may be better to apply a blur effect/shader to what does render well: a 1-pixel Bezier curve.
The problems I've been having are:
Applying a shader to just the area of the screen where the lines are drawn. If there's a way to draw lines to a texture, blur that texture, and then save the PNG, that would be great to hear about. There's got to be a way to do this; I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated.
Using just 2D calls could be advantageous, simpler to understand and re-use.
It would be very nice to know how to save a PNG that preserves the transparency/alpha stuff.
p.s. I've reviewed this post (and others), have the samples working, and have even developed my own 5x5 shader. But it's 3D and a scene-wide thing that doesn't seem to translate well to 2D or to just a certain area.
http://www.blitzbasic.com/Community/posts.php?topic=85263
Ok, well I don't know about BlitzMax, so I can't go into much detail regarding implementation, but to give you some pointers:
For applying shaders to specific parts of the image only, you will probably want to use multiple rendering passes to compose your scene.
If you have pixel access, doing the same things that fragment shaders do is of course possible "the oldskool way" in 2D, i.e. with something like getpixel/setpixel. However, you'll get much poorer performance this way.
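For instance, a minimal sketch (plain C, untested) of such an oldskool blur: one horizontal pass of a separable box blur over an RGBA8 buffer. A matching vertical pass, with the whole thing repeated two or three times, approximates the Gaussian falloff you'd want for a glow; draw the sharp 1-pixel curve on top afterwards:

    #include <stdlib.h>
    #include <string.h>

    /* one horizontal box-blur pass with radius r over an RGBA8 image */
    void box_blur_h(unsigned char *img, int w, int h, int r)
    {
        unsigned char *row = malloc((size_t)w * 4);
        for (int y = 0; y < h; ++y) {
            /* keep an unmodified copy of the row to read from */
            memcpy(row, img + (size_t)y * w * 4, (size_t)w * 4);
            for (int x = 0; x < w; ++x)
                for (int c = 0; c < 4; ++c) {
                    int sum = 0, n = 0;
                    for (int k = -r; k <= r; ++k) {
                        int xx = x + k;
                        if (xx >= 0 && xx < w) { sum += row[xx * 4 + c]; ++n; }
                    }
                    img[((size_t)y * w + x) * 4 + c] = (unsigned char)(sum / n);
                }
        }
        free(row);
    }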
If you have a texture with an alpha channel intact, saving in PNG with an alpha channel should Just Work (sorry, once again no idea how to do this in BlitzMax specifically). Just make sure you're using RGBA modes all along.