I'm trying to create a go board using OpenGL. To do this, I'm drawing a bunch of lines to form the grid. However, every tutorial site (including OpenGL's own) has its examples in C++, and the Haskell wiki doesn't do a good job of explaining it. I'm new to OpenGL and would like a tutorial.
I'll assume that you want to use OpenGL 2.1 or earlier. For OpenGL 3.0, you need different code.
So, in C you would write this:
glBegin(GL_LINES);
glVertex3f(1, 2, 3);
glVertex3f(5, 6, 7);
glEnd();
You write the equivalent in Haskell like this:
renderPrimitive Lines $ do
vertex $ Vertex3 1 2 3
vertex $ Vertex3 5 6 7
With this code, since I used literals like 1 instead of variables, you might get errors about ambiguous types; in that case, replace 1 with (1 :: GLfloat). If you use variables that already have the type GLfloat, you shouldn't have to do this.
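For example, here is the two-vertex snippet again with the annotations spelled out (the complete program below uses the same trick):
renderPrimitive Lines $ do
  vertex (Vertex3 1 2 3 :: Vertex3 GLfloat)
  vertex (Vertex3 5 6 7 :: Vertex3 GLfloat)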
Here's a complete program that draws a white diagonal in the window:
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
main :: IO ()
main = do
  -- Initialize OpenGL via GLUT
  (progname, _) <- getArgsAndInitialize
  -- Create the output window (we don't need the Window handle here)
  _ <- createWindow progname
  -- Every time the window needs to be updated, call the display function
  displayCallback $= display
  -- Let GLUT handle the window events, calling the displayCallback as fast as it can
  mainLoop

display :: IO ()
display = do
  -- Clear the screen with the default clear color (black)
  clear [ ColorBuffer ]
  -- Render a line from the bottom left to the top right
  renderPrimitive Lines $ do
    vertex (Vertex3 (-1) (-1) 0 :: Vertex3 GLfloat)
    vertex (Vertex3 1 1 0 :: Vertex3 GLfloat)
  -- Send all of the drawing commands to the OpenGL server
  flush
The default OpenGL fixed-function projection maps (-1, -1) to the bottom left and (1, 1) to the top right of the window. To use a different coordinate space, alter the projection matrix, as sketched below.
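Since the original goal is a go board, here is a rough sketch of how you could combine the two: set up a board-sized coordinate system with ortho2D (from the GLU part of the OpenGL package) and draw the grid with Lines. Treat this as an untested outline rather than a finished implementation; drawGrid is my own name:

import Control.Monad (forM_)
import Graphics.Rendering.OpenGL

-- Draw an n x n grid (n = 19 for a full go board), with board
-- coordinates running from 0 to n - 1 and half a cell of margin.
drawGrid :: Int -> IO ()
drawGrid n = do
  matrixMode $= Projection
  loadIdentity
  ortho2D (-0.5) (fromIntegral n - 0.5) (-0.5) (fromIntegral n - 0.5)
  matrixMode $= Modelview 0
  loadIdentity
  renderPrimitive Lines $
    forM_ [0 .. n - 1] $ \i -> do
      let c = fromIntegral i :: GLdouble
          m = fromIntegral (n - 1)
      -- i-th horizontal line, then i-th vertical line
      vertex (Vertex2 0 c) >> vertex (Vertex2 m c)
      vertex (Vertex2 c 0) >> vertex (Vertex2 c m)

You would call drawGrid 19 from the display callback in place of the diagonal above, and flush afterwards as before.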
For more complete examples like this, see the Haskell port of the NeHe tutorials. They use the raw OpenGL bindings, which are closer to the C API.
A quick Google search turned up this:
http://www.haskell.org/haskellwiki/OpenGLTutorial1
In any case, since OpenGL is originally a C library, you may want to get your feet wet in C (or C++) first, since you'll be able to use the original OpenGL documentation as-is. After that, you can dig into the Haskell bindings and see how the same OpenGL calls are made from Haskell.
Related
I'm currently taking a course on advanced functional programming using OCaml. For the programming project I have the task of implementing a 3D plotter for graphing functions. I'm getting started with lablgl and GLUT, but unfortunately I haven't found good tutorials on GLUT using OCaml. I've managed to plot a 2D graph in a very naive way, and I figure it's probably not the right way to do it, as I don't really understand what I'm doing. Could someone please help me understand the strategy for 3D plotting using GLUT? I'm really stuck trying to implement a 3D version of what I've done.
Here's the code I've written for plotting a function in 2D.
open Gl;;
open GlMat;;
open GlDraw;;
open GlClear;;
open Glut;;

(* Transform RGB values from [0 - 255] to the [0.0 - 1.0] range OpenGL expects *)
let oc x = float x /. 255.;;

(* The function to be graphed *)
let expression x = sin (10. *. x) /. (1. +. x *. x);;

(* The rendering function, drawing 2001 points in a 400x400 canvas *)
let display () =
  GlClear.color (oc 255, oc 255, oc 255);
  clear [`color];
  load_identity ();
  (* Draw the x and y axes in black *)
  begins `lines;
  GlDraw.color (oc 0, oc 0, oc 0);
  List.iter vertex2 [-1., 0.; 1., 0.];
  List.iter vertex2 [0., -1.; 0., 1.];
  ends ();
  (* Plot the function point by point *)
  begins `points;
  for i = 0 to 2000 do
    let x = (float i -. 1000.) /. 400. in
    let y = expression x in
    vertex2 (x, y)
  done;
  ends ();
  swapBuffers ();
  flush ()
;;

(* General setup and main loop *)
let () =
  ignore (init Sys.argv);
  initWindowSize ~w:400 ~h:400;
  initDisplayMode ~double_buffer:true ();
  ignore (createWindow ~title:"Sin(x*10)/(1+x^2)");
  mode `modelview;
  displayFunc ~cb:display;
  idleFunc ~cb:(Some postRedisplay);
  keyboardFunc ~cb:(fun ~key ~x:_ ~y:_ -> if key = 27 then exit 0);
  mainLoop ()
;;
The best tutorial for GLUT that I've seen so far is Chapter 6 of the OCaml for Scientists book. You might also have seen this tutorial from the same authors; it may help you.
These libraries, like OpenGL bindings in any other language, are usually underdocumented, because it is assumed that the user already knows how OpenGL works. So it might be a good idea to start from an OpenGL book and follow it using OCaml. In that case the Tgls library, which provides thin OpenGL bindings for OCaml, will work better.
Last, but not least, OpenGL visualization is not the best idea for a final project in a functional programming course. OpenGL is very imperative by nature and has nothing to do with functional programming; you will also learn little about OCaml itself, since you will effectively be using OCaml as C.
If you still want to do some visualization, it might be a better idea to move away from OpenGL and 3D graphics, and instead do 2D with the declarative Vg library, which is purely functional.
I'm trying to implement mouse picking in a small application written in Haskell. I want to retrieve the projection matrix that has been set with this code, found in the resize function that is called when the window is resized:
resize w h = do
  GL.viewport $= (GL.Position 0 0, GL.Size (fromIntegral w) (fromIntegral h))
  GL.matrixMode $= GL.Projection
  GL.loadIdentity
  GL.perspective 45 (fromIntegral w / fromIntegral h) 1 100
The best I've achieved so far is to set the current matrix mode to GL.Projection and then try to read the GL.currentMatrix StateVar like this:
GL.matrixMode $= GL.Projection
pm <- GL.get GL.currentMatrix
-- inverse the matrix, somehow, and multiply this with the clip plane position of
-- of the mouse
This doesn't work and produces this error:
Ambiguous type variable `m0' in the constraint:
(GL.Matrix m0) arising from a use of `GL.currentMatrix'
Probable fix: add a type signature that fixes these type variable(s)
In the first argument of `GL.get', namely `GL.currentMatrix'
In a stmt of a 'do' expression: pm <- GL.get GL.currentMatrix
I think I should be using some sort of type constraint when trying to get the matrix out of the StateVar, but changing the GL.get call to pm <- GL.get (GL.currentMatrix :: GL.GLfloat) just produces a different and equally puzzling message.
I know this is using the old deprecated OpenGL matrix stack, and that modern code should be using shaders and such to perform its own matrix handling, but I'm not quite comfortable enough in Haskell to attempt anything beyond the most basic of projects. If it's easy enough, I would certainly try to convert what little rendering code I have to a more modern style, but I find it difficult to find suitable tutorials to help me along.
First things first: currentMatrix is deprecated and has been removed in the most recent OpenGL package (2.9.2.0). To use the most recent version, you can upgrade the dependency in your .cabal file. If you look at the source, GL.currentMatrix is identical to calling GL.matrix Nothing.
Second: The error you're receiving is because Haskell doesn't know the type of matrix component (float or double) that you're trying to read from the GL state. You're on the right track about adding a type signature to the function call, but GL.currentMatrix has type
(GL.Matrix m, GL.MatrixComponent c) => GL.StateVar (m c)
Hence, you need to fully specify the type in order to disambiguate it for Haskell. If you're set on using the old fixed-function pipeline, the type signature should look something like this:
pm <- GL.get (GL.currentMatrix :: GL.StateVar (GL.GLmatrix GL.GLfloat))
That being said, your mouse picking code may still have problems, because there are a couple of other factors you need to account for:
You need both the modelview and projection matrices to get the proper world-space position of the ray into your scene. The call to GL.currentMatrix just gets the current matrix for whatever the current matrix mode is.
Inverting a 4x4 matrix isn't part of the OpenGL package, IIRC, so you'd need your own inversion code.
Once you have the proper matrices, the GLU part of the OpenGL package has an unProject function that might do what you need; see the sketch below.
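For reference, here is a rough, untested sketch of how those pieces could fit together. pickRay is my own name, and the exact unProject argument order should be checked against the GLU.Matrix documentation for your version of the OpenGL package:

import Graphics.Rendering.OpenGL

-- Build a picking ray from a mouse position given in window coordinates.
-- OpenGL's window origin is the bottom left, so the mouse y coordinate
-- usually has to be flipped against the window height first.
pickRay :: GLdouble -> GLdouble -> IO (Vertex3 GLdouble, Vertex3 GLdouble)
pickRay winX winY = do
  mv <- get (matrix (Just (Modelview 0))) :: IO (GLmatrix GLdouble)
  pr <- get (matrix (Just Projection))    :: IO (GLmatrix GLdouble)
  vp <- get viewport
  -- unProject inverts the combined matrices internally, so no hand-written
  -- 4x4 inversion is needed; unprojecting the cursor at the near (z = 0)
  -- and far (z = 1) planes yields two points that define the ray.
  nearPt <- unProject (Vertex3 winX winY 0) mv pr vp
  farPt  <- unProject (Vertex3 winX winY 1) mv pr vp
  return (nearPt, farPt)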
I have lots of text to draw. If I call D3DXFont::DrawText with its first parameter (the sprite) being NULL, I get terrible performance.
I heard that using D3DXFont in conjunction with D3DXSprite makes things much faster.
Here is how my application needs to draw strings:
It draws every string with a pseudo-shadow: I draw each string 4 times in black, offset by
x + 1, y + 1
x - 1, y + 1
x - 1, y - 1
x + 1, y - 1
and 1 time in the actual color. This makes very nice looking, always readable strings. I even switched to pixel fonts for faster rendering.
Call such a string with a shadow a ShadowString.
Every frame I draw up to 256 (worst-case scenario) of those ShadowStrings on screen.
I would like to know how to use sprites (or any other technique) to speed up drawing those strings as much as possible. Right now I'm getting 30 FPS in the app, but my target is 120 minimum, and the problem is ONLY the text drawing.
Surely, you must profile your application before any optimization, but truth be told, D3DXFont/D3DXSprite and "fast" are mutually exclusive concepts. If they don't fit, just don't use them.
Use 3rd party libraries or make your own sprite/font renderer.
I recently answered a question about how to do this here: How to draw line and font without D3DX9 in DirectX 9?
Also, Google for "sprite font", "sprite batching", "texture atlases", and "TTF rendering". It is not very difficult if you are familiar with the API (notably vertex buffers and texturing), and there are plenty of examples on the web. Don't hesitate to look at D3D11 or OpenGL examples; the principles are the same.
How do I make allegro 5 use anti-aliasing when drawing? I need diagonal lines to appear smooth. Currently, they are only lines of shaded pixels, and the edges look hard.
To enable anti-aliasing for the primitives:
// before creating the display:
al_set_new_display_option(ALLEGRO_SAMPLE_BUFFERS, 1, ALLEGRO_SUGGEST);
al_set_new_display_option(ALLEGRO_SAMPLES, 8, ALLEGRO_SUGGEST);
display = al_create_display(640, 480);
Note that anti-aliasing will only work for primitives drawn directly to the back buffer. It will not work anywhere else.
On OpenGL, your card must support the ARB_multisample extension.
To check if it was enabled (when using ALLEGRO_SUGGEST):
if (al_get_display_option(display, ALLEGRO_SAMPLE_BUFFERS)) {
    printf("With multisampling, level %i\n",
           al_get_display_option(display, ALLEGRO_SAMPLES));
}
else {
    printf("Without multisampling.\n");
}
You have two options: line smoothing or multisampling.
You can activate line smoothing by using glEnable(GL_LINE_SMOOTH). Note that Allegro 5 may reset this when you draw lines through Allegro.
The other alternative is to create a multisampled display. This must be done before calling al_create_display. The way to do it goes something like this:
al_set_new_display_option(ALLEGRO_SAMPLE_BUFFERS, 1, ALLEGRO_REQUIRE);
al_set_new_display_option(ALLEGRO_SAMPLES, #, ALLEGRO_SUGGEST);
The # above should be the number of samples to use. How many? That's implementation-dependent, and Allegro can't pick it for you; that's why I used ALLEGRO_SUGGEST rather than ALLEGRO_REQUIRE for the number of samples. The more samples you use, the better the quality, but 8 samples might be a good value that's supported on most hardware.
I have a 2D list of vectors (say 20x20 / 400 points) and I am drawing these points on a screen like so:
for row in grid:
    for point in row:
        pygame.draw.circle(window, white, (point.x, point.y), 2, 0)
pygame.display.flip()  # redraw the screen
This works perfectly, however it's much slower than I expected.
I want to rewrite this in C++ and hopefully learn some things along the way (I'm doing a unit on C++ at the moment, so it'll help). What's the easiest way to approach this? I have looked at DirectX and have so far followed a bunch of tutorials and drawn some rudimentary triangles. However, I can't find a simple way to draw a point.
DirectX doesn't have functions for drawing just one point. It operates on vertex and index buffers only. If you want a simpler way to draw a single point, you'll need to write a wrapper.
For drawing lists of points you'll need DrawPrimitive(D3DPT_POINTLIST, ...). However, there is no easy way to just plot a point: you'll have to prepare a buffer, lock it, fill it with data, and then draw the buffer. Or you could use dynamic vertex buffers to optimize performance. There is a DrawPrimitiveUP call that is supposed to render primitives stored in system memory (instead of in buffers), but as far as I know it doesn't work (it may silently discard primitives) with pure devices, so you'd have to use software vertex processing.
In OpenGL you have glVertex2f and glVertex3f. Your call would look like this (there might be a typo or syntax error; I didn't compile or run it):
glBegin(GL_POINTS);
glColor3f(1.0, 1.0, 1.0); // white
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        glVertex2f(points[y][x].x, points[y][x].y); // plot point
glEnd();
OpenGL is MUCH easier for playing around and experimenting with than DirectX. I'd recommend taking a look at SDL and using it in conjunction with OpenGL. Or you could use GLUT instead of SDL.
Or you could try Qt 4. It has very good 2D rendering routines.
When I first dabbled with game/graphics programming I became fond of Allegro. It's got a huge range of features and a pretty easy learning curve.