I'm writing an object loader in Lisp using cl-opengl, and when trying to render the loaded vertices/elements using glDrawElements, I'm left with a blank screen.
(require :cl-opengl)
(require :sdl2)
(defvar *vertices* nil)
(defvar *elements* nil)
(setf *vertices* (make-array 9 :fill-pointer 0 :adjustable t))
(setf *elements* (make-array 9 :fill-pointer 0 :adjustable t))
(defvar *vertex-shader* "
#version 330 core
layout (location = 0) in vec3 aPos;
void main() {
gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
}
")
(defvar *fragment-shader* "
#version 330 core
out vec4 FragColor;
void main() {
FragColor = vec4(0.95f, 0.98f, 0.65f, 1.0f);
}
")
(defun split-str-1 (string &optional (separator " ") (r nil))
(let ((n (position separator string
:from-end t
:test #'(lambda (x y)
(find y x :test #'string=)))))
(if n
(split-str-1 (subseq string 0 n) separator (cons (subseq string (1+ n)) r))
(cons string r))))
(defun split-str (string &optional (separator " "))
(split-str-1 string separator))
(defun parse-float (number)
(with-input-from-string (in number)
(read in)))
(defun load-obj (file-name)
(let ((file (open file-name)))
(with-open-stream (source file)
(loop for line = (read-line source nil nil)
while line do
(let* ((split-line (split-str line " "))
(header (car split-line))
(rest (cdr split-line)))
(cond ((string= header "v")
(dolist (vertex rest)
(vector-push-extend (parse-float vertex) *vertices*)))
((string= header "f")
(dolist (face rest)
(let ((element (parse-integer (car (split-str face "/")))))
(vector-push-extend (- element 1) *elements*))))))))))
(defun main ()
(load-obj "tortoise.obj")
(sdl2:with-init (:everything)
(sdl2:gl-set-attr :context-profile-mask 0)
(sdl2:gl-set-attr :context-major-version 3)
(sdl2:gl-set-attr :context-minor-version 3)
(sdl2:with-window (win :flags `(:shown :opengl))
(sdl2:with-gl-context (gl-context win)
(sdl2:gl-make-current win gl-context)
(gl:viewport 0 0 800 600)
(gl:clear-color 0.957 0.376 0.286 1.0)
(let ((glarray (gl:alloc-gl-array :float (length *vertices*)))
(glarray-2 (gl:alloc-gl-array :unsigned-short (length *elements*))))
(dotimes (i (length *elements*))
(setf (gl:glaref glarray-2 i) (aref *elements* i)))
(dotimes (i (length *vertices*))
(setf (gl:glaref glarray i) (aref *vertices* i)))
(let ((vbo (gl:gen-buffer))
(vao (gl:gen-vertex-array))
(ebo (gl:gen-buffer)))
(gl:bind-vertex-array vao)
(gl:bind-buffer :array-buffer vbo)
(gl:buffer-data :array-buffer :static-draw glarray)
(gl:free-gl-array glarray)
(gl:bind-buffer :element-array-buffer ebo)
(gl:buffer-data :element-array-buffer :static-draw glarray-2)
(gl:vertex-attrib-pointer 0 4 :float nil 0 0)
(gl:enable-vertex-attrib-array 0)
(let ((vertex-shader (gl:create-shader :vertex-shader))
(fragment-shader (gl:create-shader :fragment-shader)))
(gl:shader-source vertex-shader *vertex-shader*)
(gl:shader-source fragment-shader *fragment-shader*)
(gl:compile-shader vertex-shader)
(gl:compile-shader fragment-shader)
(print (gl:get-shader-info-log vertex-shader))
(print (gl:get-shader-info-log fragment-shader))
(let ((program (gl:create-program)))
(gl:attach-shader program vertex-shader)
(gl:attach-shader program fragment-shader)
(gl:link-program program)
(gl:delete-shader vertex-shader)
(gl:delete-shader fragment-shader)
(gl:use-program program)))
(sdl2:with-event-loop (:method :poll)
(:idle ()
(gl:clear :color-buffer)
(gl:bind-vertex-array vao)
(gl:draw-elements :triangles glarray-2)
(gl:flush)
(sdl2:gl-swap-window win))
(:quit () t))))))))
I've experimented with multiple obj files, and the results are the same: nothing is drawn to the screen. I've looked at some other SO posts and haven't found anything particularly helpful, and I can't think of anything that would be causing this.
In your case the vertex array is an array of coordinates with 3 components (x, y, z), so the "size" parameter of gl:vertex-attrib-pointer has to be 3 instead of 4:
(gl:vertex-attrib-pointer 0 3 :float nil 0 0)
Note, by using a size of 4, the specification of the vertex coordinates is misaligned, and at the end the vertex array is accessed out of bounds.
Your assumption about the indices of the obj file may be wrong.
In general an obj file looks like this:
v -1.000000 0.000000 1.000000
v 1.000000 0.000000 1.000000
v -1.000000 0.000000 -1.000000
v 1.000000 0.000000 -1.000000
vt 0.000000 0.000000
vt 1.000000 0.000000
vt 0.000000 1.000000
vt 1.000000 1.000000
vn 0.0000 1.0000 0.0000
f 1/1/1 2/2/1 4/4/1
f 1/1/1 4/4/1 3/3/1
It consists of vertex coordinates (v, 3 components), texture coordinates (vt, 2 components) and normal vectors (vn, 3 components).
Further there are the faces (f). Each face specifies a single triangle, with 3 vertices and their attributes.
Each vertex consists of three indices: the 1st is the index of the vertex coordinate, the 2nd is the index of the texture coordinate and the 3rd is the index of the normal vector.
This means that the following face
f 1/1/1 2/2/1 4/4/1
defines a single triangle (with 3 vertices) where the indices of the vertex coordinates (1st indices) are
1// 2// 4//
the indices of the corresponding texture coordinates (2nd indices) are
/1/ /2/ /4/
and the indices of the corresponding normal vectors (3rd indices) are
//1 //1 //1
You may try
vertices #(-0.707 -0.5 0.0
0.707 -0.5 0.0
0.0 1.0 0.0)
instead of the original vertex array and
elements #(0 1 2)
instead of the original indices, to draw a single triangle, for debug reasons.
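For reference, the first-index extraction described above can be sketched like this (Python used here purely for illustration; the original loader is Common Lisp, and the function name is made up):

```python
def face_indices(face_line):
    """Return the 0-based vertex-coordinate indices of an OBJ 'f' line.

    Only the 1st of the slash-separated indices refers to the vertex
    coordinate, and OBJ indices are 1-based, so subtract 1.
    """
    tokens = face_line.split()[1:]                    # drop the leading "f"
    return [int(tok.split("/")[0]) - 1 for tok in tokens]

face_indices("f 1/1/1 2/2/1 4/4/1")                   # -> [0, 1, 3]
```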
This topic has been discussed quite a few times. There is a lot of information on the memory layout of matrices in OpenGL on the internet. Sadly, different sources often contradict each other.
My question boils down to:
When I have the three base vectors bx, by and bz of my matrix, and I want to make a matrix out of them to plug into a shader, how are they laid out in memory?
Let's clarify what I mean by base vector, because I suspect this can also mean different things:
When I have a 3D model that is Z-up and I want to lay it down flat in my world space along the X-axis, then bz is [1 0 0]. I.e. a vertex [0 0 2] in model space will be transformed to [2 0 0] when that vertex is multiplied by my matrix that has bz as the base vector for the Z-axis.
Coming to OpenGL matrix memory layout:
According to the GLSL spec (GLSL Spec p.110) it says:
vec3 v, u;
mat3 m;
u = v * m;
is equivalent to
u.x = dot(v, m[0]); // m[0] is the left column of m
u.y = dot(v, m[1]); // dot(a,b) is the inner (dot) product of a and b
u.z = dot(v, m[2]);
So, in order to get the best performance, I should premultiply my vertices in the vertex shader (that way the GPU can use the dot product and so on):
attribute vec4 vertex;
uniform mat4 mvp;
void main()
{
gl_Position = vertex * mvp;
}
Now OpenGL is said to be column-major (GLSL Spec p. 101), i.e. the columns are laid out contiguously in memory:
[ column 0 | column 1 | column 2 | column 3 ]
[ 0 1 2 3 | 4 5 6 7 | 8 9 10 11 | 12 13 14 15 ]
or:
[
0 4 8 12,
1 5 9 13,
2 6 10 14,
3 7 11 15,
]
This would mean that I have to store my base vectors in the rows like this:
bx.x bx.y bx.z 0
by.x by.y by.z 0
bz.x bz.y bz.z 0
0 0 0 1
So for my example with the 3D model that I want to lay flat down, it has the base vectors:
bx = [0 0 -1]
by = [0 1 0]
bz = [1 0 0]
The model vertex [0 0 2] from above would be transformed like this in the vertex shader:
// m[0] is [ 0 0 1 0]
// m[1] is [ 0 1 0 0]
// m[2] is [-1 0 0 0]
// v is [ 0 0 2 1]
u.x = dot([ 0 0 2 1], [ 0 0 1 0]);
u.y = dot([ 0 0 2 1], [ 0 1 0 0]);
u.z = dot([ 0 0 2 1], [-1 0 0 0]);
// u is [ 2 0 0]
Just as expected!
On the contrary:
This SO question, Correct OpenGL matrix format?, and consequently the OpenGL FAQ state:
For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered from 1 to 16 as described in section 2.11.2 of the OpenGL 2.1 Specification.
This says that my base vectors should be laid out in columns like this:
bx.x by.x bz.x 0
bx.y by.y bz.y 0
bx.z by.z bz.z 0
0 0 0 1
To me, these two sources, both official Khronos documentation, seem to contradict each other.
Can somebody explain this to me? Have I made a mistake? Is there indeed some wrong information?
The FAQ is correct, it should be:
bx.x by.x bz.x 0
bx.y by.y bz.y 0
bx.z by.z bz.z 0
0 0 0 1
and it's your reasoning that is flawed.
Assuming that your base vectors bx, by, bz are the model basis given in world coordinates, the transformation from the model-space vertex v to the world-space vertex B*v is given by a linear combination of the base vectors:
B*v = bx*v.x + by*v.y + bz*v.z
It is not a dot product of b with v; instead, it's the matrix multiplication where B is of the above form.
Taking the dot product of a vertex u with bx would answer the inverse question: given a world-space u, what would be its coordinate in model space along the axis bx? Therefore, multiplying by the transposed matrix transpose(B) gives you the transformation from world space to model space.
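A minimal sketch of this (plain Python, purely for illustration) shows that putting the base vectors into the columns of B reproduces the expected transformation from the question:

```python
# Base vectors of the model, expressed in world space (from the question):
bx = (0.0, 0.0, -1.0)
by = (0.0, 1.0, 0.0)
bz = (1.0, 0.0, 0.0)

def transform(v):
    # B*v = bx*v.x + by*v.y + bz*v.z -- a linear combination of the
    # base vectors, i.e. the base vectors are the *columns* of B.
    return tuple(bx[i] * v[0] + by[i] * v[1] + bz[i] * v[2] for i in range(3))

transform((0.0, 0.0, 2.0))   # -> (2.0, 0.0, 0.0), just as expected

# In column-major memory (what OpenGL consumes), the columns are
# contiguous, so the flat 4x4 array is simply [bx 0 | by 0 | bz 0 | 0 0 0 1]:
flat = [*bx, 0.0, *by, 0.0, *bz, 0.0, 0.0, 0.0, 0.0, 1.0]
```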
Is there a function in Lisp for making a sequence of integers like (0 1 2 3)?
I found make-sequence, but I couldn't find out how to make a sequence of integers.
I tried make-list, and nothing.
I know that in Scheme there is (build-list 5 (lambda (x) x)). I tried replacing build-list with make-list, but it didn't work.
Any ideas? Thanks
Edit: I need something like (make-list 5) ==> (0 1 2 3 4)
Simply done with loop:
(loop :for n :below 10 :collect n)
; ==> (0 1 2 3 4 5 6 7 8 9)
The Alexandria library, which is intended to work on any conforming implementation of Common Lisp, defines iota:
(iota 5)
=> (0 1 2 3 4)
You can also customize start and step:
(iota 3 :start 1 :step 1.0)
=> (1.0 2.0 3.0)
But often you do not need to actually produce the list, you just want to iterate over the given range. That's why there is also map-iota:
(map-iota #'print 3 :start 1 :step 1.0)
=> 3 ; prints 1.0, 2.0 and 3.0, and returns the count
In such cases you can of course use LOOP:
(loop for i from 1.0 below 22 by 1.5 do (print i))
Instead of do, you can also collect and obtain a list; this is a bit more verbose than iota, but easier to customize.
Let's see if I can still write MacLisp off the top of my head:
(defun foo (num acc)
(if (zerop num)
acc
(foo (- num 1) (cons num acc))))
(foo 5 nil)
should be
(1 2 3 4 5)
In Lisp, I am appending lists as:
(setq newlist (append (side a b)(this a b) (that a b) ))
This appends all the required lists as: (1 0 0 0 2 0 4 0 6 0)
but what I want is something like this: ((1 0) (0 0) (2 0) (4 0) (6 0))
What should I do to get the required format? Please post code examples in Lisp.
So in fact you just need to restructure the elements after you have appended them:
(loop :for (e1 e2)
:on '(1 0 0 0 2 0 4 0 6 0)
:by #'cddr
:collect (list e1 e2))
; ==> ((1 0) (0 0) (2 0) (4 0) (6 0))
Suggested reading is LOOP for Black Belts; the sections you should pay attention to, which I've used here, are "Looping Over Collections and Packages" and "Destructuring Variables". This is probably the chapter from Practical Common Lisp I've read the most. The whole book is very good, so every Lisper should know about it.
description:
I have a list (named large-number-list) which contains many numbers, and I want to get the sum of these numbers.
I divide the list into groups of three elements (each group is summed in an agent action), and put the sum of each group of three into a vector (named result).
At the end, I accumulate the elements of the vector together.
The code is as follows:
;use an agent to sum many numbers
(def result (agent []))
(def large-number-list [1 2 3 4 5 6 7 8 9 10 11 12]) ;assume that large-number-list contains many numbers
(defn doin3 [col do-fn]
(let [[x1 x2 x3 & rest-elem] col
rest-len (count rest-elem)]
(println "x1 x2 x3" x1 x2 x3)
(println "rest-len is " rest-len)
(do-fn x1 x2 x3)
(when (> rest-len 0) (doin3 rest-elem do-fn))))
;assume that calculate is a time-consuming operation
(defn calculate [v x1 x2 x3]
(conj v (+ x1 x2 x3)))
(doin3 large-number-list #(send result calculate %1 %2 %3))
(println "before await")
(await result)
(println "after await")
(println @result)
(def total (apply + result))
(println "total is:" total)
(shutdown-agents)
expected output:
x1 x2 x3 1 2 3
rest-len is 9
x1 x2 x3 4 5 6
rest-len is 6
x1 x2 x3 7 8 9
rest-len is 3
x1 x2 x3 10 11 12
rest-len is 0
before await
after await
total is: 78
actual output:
x1 x2 x3 1 2 3
rest-len is 9
x1 x2 x3 4 5 6
rest-len is 6
x1 x2 x3 7 8 9
rest-len is 3
x1 x2 x3 10 11 12
rest-len is 0
before await
question:
The code runs up to "before await" and then blocks. I guess that the action in the agent is not finished, but why?
Please let me know what is wrong with my code.
I think the problem is with this line:
(def total (apply + result))
It should be:
(def total (apply + @result))
It wasn't actually blocking, it was throwing an exception.
One more note: you should consider using recur in doin3 instead of a direct recursive call, as it's in a tail position already.
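A recur-based version of doin3 might look like this (untested sketch, keeping the same behavior but without the debug printing):

```clojure
(defn doin3 [col do-fn]
  (let [[x1 x2 x3 & rest-elem] col]
    (do-fn x1 x2 x3)
    (when (seq rest-elem)
      (recur rest-elem do-fn))))   ; recur is in tail position here
```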
I've been tasked with writing a function that generates a table given n operators. The truth table must be in a list and each row of the table must be in separate lists (inside the main list).
I know the solution involves recursion, but I just can't seem to think it through.
Can someone help me out? This is only a small part of the assignment.
The easiest way I can think of off the top of my head is to simply convert 2^n to binary and count down, then convert the output to a list.
ie for n=3:
Truth table:
a b c
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
2^3 = 8, 8 in binary = 1000, start from 1000-1 = 111 and work your way down to 0, record outputs, and voila!
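That counting idea can be sketched as follows (Python used here for illustration; the assignment's Lisp dialect isn't specified):

```python
def truth_table(n):
    # Each integer in [0, 2**n) encodes one row of the table: bit (n-1)
    # is the first column, bit 0 the last, so counting up yields the
    # rows in the order shown above.
    return [[(i >> (n - 1 - b)) & 1 for b in range(n)]
            for i in range(2 ** n)]

truth_table(2)   # -> [[0, 0], [0, 1], [1, 0], [1, 1]]
```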
If hkf's interpretation of your question is right, this should work in Racket:
#lang racket
(define (generate-table n)
(if (zero? n)
'(())
(for*/list ((y (in-list (generate-table (sub1 n))))
(x (in-list '(0 1))))
(cons x y))))
Use it like this:
(generate-table 3)
> ((0 0 0) (1 0 0) (0 1 0) (1 1 0) (0 0 1) (1 0 1) (0 1 1) (1 1 1))
Let's assume that all N operators are binary functions, like AND and OR.
;; Common Lisp
(defun truth-tables (ops)
(loop for op in ops
collecting
(loop for args in '((nil nil) (nil t) (t nil) (t t))
collecting (eval `(,op ,@args)))))
(truth-tables '(and or xor)) -> ((NIL NIL NIL T) (NIL T T T) (NIL T T NIL))
This gives you the idea. Note that here I don't have "each row of the truth table" as a sublist; rather, I have the columns of the AND, OR and XOR truth tables, respectively. The input variable combinations are left implicit: you know that the third entry of each column corresponds to (<op> t nil). Your description of the problem is not very clear.
As you can also see, I cheated by invoking the Lisp operators through generated code which is dynamically evaluated. (Note that xor is not a standard Common Lisp operator; you'd need one from a library such as Alexandria, or define it yourself.)