Creating cursor trail with fragment shader - glsl

I wish to draw a simple mouse trail using fragment shaders, similar in appearance to drawing the following in Processing (omitting the step of clearing the canvas). I cannot wrap my head around the setup necessary to achieve this.
// processing reference using cursor as paintbrush
void setup () {
size(400, 400);
background(255);
fill(0);
}
void draw () {
ellipse(mouseX, mouseY, 20, 20);
}
Here's my vain approach, based on this shadertoy example:
I draw a simple shape at cursor position
void main(void) {
float pct = 0.0;
pct = distance(inData.v_texcoord.xy, vec2(mouse.x, 1.-mouse.y)) * SIZE;
pct = 1.0 - pct - BRIGHTNESS;
vec3 blob = vec3(pct);
fragColor = vec4( blob, 1.0 );
}
Then my confusion begins. My thinking goes that I'd need to mix the output above with a texture containing my previous pass. This creates at least a solid trail, albeit copying the previous pass only within a set distance from the mouse position.
#shader pass 1
void main(void) {
float pct = 0.0;
pct = distance(inData.v_texcoord.xy, vec2(mouse.x, 1.-mouse.y)) * SIZE;
pct = 1.0 - pct - BRIGHTNESS;
vec3 blob = vec3(pct);
vec3 stack = texture(prevPass, inData.v_texcoord.xy).xyz;
fragColor = vec4( blob*.1 + (stack*2.), 1.0 );
}
#shader pass 2
void main(void) {
fragColor = texture(prevPass,inData.v_texcoord);
}
Frankly, I'm a little in the dark about how to draw without geometry data and "stack" previous draw calls in WebGL on a conceptual level, and I'm having a hard time finding beginner documentation.
I would be grateful if someone could point me towards where my code and thinking becomes faulty, or point me towards some resources.

What you need to do is:
After doing your first pass rendering (i.e. drawing an ellipse at the cursor position), copy the contents of the framebuffer to a different texture.
Then pass this texture as a sampler input to the next pass. Notice how that shadertoy example has 2 images.
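This copy-then-feed-back loop (often called ping-ponging between two buffers) can be sketched without any GL boilerplate. Below, two plain JavaScript arrays stand in for the two framebuffer textures; the decay factor and all names are illustrative, not from the question's code. In WebGL you would alternate between two texture-backed framebuffers in exactly the same way, since a pass cannot read from and write to the same texture:

```javascript
// Ping-pong accumulation sketch: each frame reads the previous frame from
// one buffer, adds the new blob on top, writes into the other, then swaps.
const SIZE = 8;                          // 1-D "canvas" for illustration
let read  = new Array(SIZE).fill(0);     // previous frame (the trail so far)
let write = new Array(SIZE).fill(0);     // current render target

function drawFrame(mouseX, decay = 0.9) {
  for (let i = 0; i < SIZE; i++) {
    const blob = i === mouseX ? 1 : 0;              // new splat at the cursor
    write[i] = Math.min(1, blob + read[i] * decay); // blob + faded history
  }
  [read, write] = [write, read];                    // swap roles for next frame
}

drawFrame(2);          // paint at x = 2
drawFrame(5);          // paint at x = 5; x = 2 persists, slightly faded
console.log(read[5]);  // 1 (fresh)
console.log(read[2]);  // 0.9 (one frame old, decayed)
```

With decay below 1.0 the trail fades out instead of accumulating forever, which is the usual reason for multiplying the previous pass by a factor slightly less than one.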

You can make a simple HTML/Javascript trail with this code:
<!DOCTYPE html>
<style>
.trail { /* className for trail elements */
position: absolute;
height: 6px; width: 6px;
border-radius: 3px;
background: teal;
}
body {
height: 300px;
}
</style>
<body>
<script>
document.body.addEventListener("mousemove", moved);
// create, style, append trail elements
var trailElements = [];
var numOfTrailElements = 10;
for (var i = 0; i < numOfTrailElements; i++) {
var element = document.createElement('div');
element.className = 'trail';
document.body.appendChild(element);
trailElements.push(element);
}
// when mouse moves, display trail elements in wake of mouse pointer
var counter = 0; // current trail element index
function moved(event) {
trailElements[counter].style.left = event.clientX + 'px';
trailElements[counter].style.top = event.clientY + 'px';
counter = (counter + 1) % numOfTrailElements; // wrap around
}
</script>
</body>
<!doctype html>
<style>
.trail { /* className for the trail elements */
position: absolute;
height: 6px; width: 6px;
border-radius: 3px;
background: black;
}
body {
height: 300px;
}
</style>
<body>
<script>
var dots = [];
for (var i = 0; i < 12; i++) {
var node = document.createElement("div");
node.className = "trail";
document.body.appendChild(node);
dots.push(node);
}
var currentDot = 0;
addEventListener("mousemove", function(event) {
var dot = dots[currentDot];
dot.style.left = (event.pageX - 3) + "px";
dot.style.top = (event.pageY - 3) + "px";
currentDot = (currentDot + 1) % dots.length;
});
</script>
</body>

Related

Detect specific angle in image with OpenCV

I'm currently developing an application that takes images and detects a specific angle in each image.
The images always look something like this: original image.
I want to detect the angle of the bottom cone.
In order to do that I crop the image and use two Hough-line passes: one for the cone and one for the table at the bottom. This works fairly well and I get the correct result in 90% of the images.
result of the two algorithms
Doesn't work
Doesn't work either
My approach works for now because I can guarantee that the cone will always be in an angle range of 5 to 90°, so I can filter the Hough lines based on their angle.
However, I wonder if there is a better approach. This is my first time working with OpenCV, so maybe this community has some tips to improve the whole thing. Any help is appreciated!
My code for the cone so far:
public (Bitmap bmp , double angle) Calculate(Mat imgOriginal, Mat imgCropped, int Y)
{
Logging.Log("Functioncall: Calculate");
var finalAngle = 0.0;
Mat imgWithLines = imgOriginal.Clone();
// how croppedImage looks
var grey = new Mat();
CvInvoke.CvtColor(imgCropped, grey, ColorConversion.Bgr2Gray);
var bilateral = new Mat();
CvInvoke.BilateralFilter(grey, bilateral, 15, 85, 15);
var blur = new Mat();
CvInvoke.GaussianBlur(bilateral, blur, new Size(5, 5), 0); // Kernel reduced from 31 to 5
var edged = new Mat();
CvInvoke.Canny(blur, edged, 0, 50);
var iterator = true;
var counter = 0;
var hlThreshhold = 28;
while (iterator && counter < 40)
{
counter++;
var threshold = hlThreshhold;
var rho = 1;
var theta = Math.PI / 180;
var lines = new VectorOfPointF();
CvInvoke.HoughLines(edged, lines, rho, theta, threshold);
var angles = CalculateAngles(lines);
if (angles.Length > 1)
{
hlThreshhold += 1;
}
if (angles.Length < 1)
{
hlThreshhold -= 1;
}
if (angles.Length == 1)
{
try
{
//Calc the more detailed position of glassLine and use it for Calc with ConeLine instead of perfect horizontal line
var glassLines = new VectorOfPointF();
var glassTheta = Math.PI / 720; // accuracy: PI / 180 => 1 degree | PI / 720 => 0.25 degree |
CvInvoke.HoughLines(edged, glassLines, rho, glassTheta, threshold);
var glassEdge = CalculateGlassEdge(glassLines);
iterator = false;
// finalAngle = angles.FoundAngle; // display the angle with 2 decimal places
CvInvoke.Line(imgWithLines, new Point((int)angles.LineCoordinates[0].P1.X, (int)angles.LineCoordinates[0].P1.Y + Y), new Point((int)angles.LineCoordinates[0].P2.X, (int)angles.LineCoordinates[0].P2.Y + Y), new MCvScalar(0, 0, 255), 5);
CvInvoke.Line(imgWithLines, new Point((int)glassEdge.LineCoordinates[0].P1.X, (int)glassEdge.LineCoordinates[0].P1.Y + Y), new Point((int)glassEdge.LineCoordinates[0].P2.X, (int)glassEdge.LineCoordinates[0].P2.Y + Y), new MCvScalar(255, 255, 0), 5);
// calc Angle ConeLine and GlassLine
finalAngle = 90 + angles.LineCoordinates[0].GetExteriorAngleDegree(glassEdge.LineCoordinates[0]);
finalAngle = Math.Round(finalAngle, 1);
//Calc CrossPoint
PointF crossPoint = getCrossPoint(angles.LineCoordinates[0], glassEdge.LineCoordinates[0]);
//Draw dashed Line through crossPoint
drawDrashedLineInCrossPoint(imgWithLines, crossPoint, 30);
}
catch (Exception e)
{
Console.WriteLine(e.Message);
finalAngle = 0.0;
imgWithLines = imgOriginal.Clone();
}
}
}
Image cropping (the table is always in the same position, so I use that position and a height parameter to get only the bottom of the cone):
public Mat ReturnCropped(Bitmap imgOriginal, int GlassDiscLine, int HeightOffset)
{
var rect = new Rectangle(0, 2500-GlassDiscLine-HeightOffset, imgOriginal.Width, 400);
return new Mat(imgOriginal.ToMat(), rect);
}
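The adaptive threshold search in the while loop above can be expressed independently of OpenCV: raise the Hough threshold when too many candidate lines survive, lower it when too few do, until exactly one remains or the attempts run out. A sketch in JavaScript, where the hypothetical countLines(threshold) stands in for the HoughLines-plus-CalculateAngles step (both names invented here):

```javascript
// Find a Hough threshold that yields exactly one candidate line.
// countLines should be (weakly) decreasing as the threshold rises.
function tuneThreshold(countLines, start = 28, maxTries = 40) {
  let threshold = start;
  for (let i = 0; i < maxTries; i++) {
    const n = countLines(threshold);
    if (n === 1) return threshold;   // converged on a single line
    threshold += n > 1 ? 1 : -1;     // too many -> raise, too few -> lower
  }
  return null;                       // did not converge within maxTries
}

// Toy model: 40 lines at threshold 0, one fewer per unit of threshold.
const toyCount = t => Math.max(0, 40 - t);
console.log(tuneThreshold(toyCount)); // 39 (first threshold giving 1 line)
```

A step size of 1 keeps the search simple but slow; if convergence time matters, a bisection over the threshold range would reach the answer in logarithmically many Hough passes.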

How to rotate camera around object without centering to it

I would like to make a camera rotate around an object, but without shifting the pivot to its center. A good example I made with Blender:
Link to gif (In this example camera rotates around cursor, but it works as an example)
So what I want is: when I click a certain object, I want to rotate around it, but without centering the camera pivot on the object's position, basically retaining the object's position on screen. I found many examples on rotating around an object's center, but I can't seem to find anything for my problem.
Currently I have working camera rotation and movement, but I don't know how to approach this. I am working in OpenGL with the Cinder framework.
I would be grateful for a simple explanation on how would I be able to do it :)
My current code:
void HandleUICameraRotate() {
//selectedObj <- object...has position etc..
float deltaX = (mMousePos.x - mInitialMousePos.x) / -100.0f;
float deltaY = (mMousePos.y - mInitialMousePos.y) / 100.0f;
// Camera direction vector
glm::vec3 mW = glm::normalize(mInitialCam.getViewDirection());
bool invertMotion = (mInitialCam.getOrientation() * mInitialCam.getWorldUp()).y < 0.0f;
// Right axis vector
vec3 mU = normalize(cross(mInitialCam.getWorldUp(), mW));
if (invertMotion) {
deltaX = -deltaX;
deltaY = -deltaY;
}
glm::vec3 rotatedVec = glm::angleAxis(deltaY, mU) * (-mInitialCam.getViewDirection() * mInitialPivotDistance);
rotatedVec = glm::angleAxis(deltaX, mInitialCam.getWorldUp()) * rotatedVec;
mCamera.setEyePoint(mInitialCam.getEyePoint() + mInitialCam.getViewDirection() * mInitialPivotDistance + rotatedVec);
mCamera.setOrientation(glm::angleAxis(deltaX, mInitialCam.getWorldUp()) * glm::angleAxis(deltaY, mU) * mInitialCam.getOrientation());
}
This is how you can do this rotation (look at the function orbit(...) in the code below).
The basic idea is to rotate the position and the lookAt direction of the camera about the target position. When you run the code demo, use the mouse right button to select the target, and move the mouse to rotate the camera around the target.
Hit me up if you need any clarifications.
let renderer;
let canvas;
let camera;
let scene;
const objects = [];
const highlightGroup = new THREE.Group();
const xaxis = new THREE.Vector3(1, 0, 0);
const yaxis = new THREE.Vector3(0, 1, 0);
const zaxis = new THREE.Vector3(0, 0, 1);
const radius = 10;
const fov = 40;
const tanfov = Math.tan(fov * Math.PI / 360.0);
function initCamera() {
const aspect = 2; // the canvas default
const near = 0.1;
const far = 2000;
camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
camera.position.set(0, 0, 500);
}
function initLights() {
const color = 0xFFFFFF;
const intensity = 1;
const light = new THREE.PointLight(color, intensity);
light.position.set(0,0,200)
scene.add(light);
const light1 = new THREE.PointLight(color, intensity);
light1.position.set(100,200,-200)
scene.add(light1);
}
function initObjects() {
const geometry = new THREE.SphereBufferGeometry( radius, 13, 13 );
const yellowMat = new THREE.MeshPhongMaterial( {color: 0xffff00} );
const redMat = new THREE.MeshPhongMaterial( {color: 0xff0000} );
const greenMat = new THREE.MeshPhongMaterial( {color: 0x00ff00} );
const blueMat = new THREE.MeshPhongMaterial( {color: 0x0000ff} );
const magentaMat = new THREE.MeshPhongMaterial( {color: 0xff00ff} );
const cyanMat = new THREE.MeshPhongMaterial( {color: 0x00ffff} );
const lblueMat = new THREE.MeshPhongMaterial( {color: 0x6060ff} );
let sphere
sphere = new THREE.Mesh( geometry, yellowMat );
sphere.position.set(0, 0, 0);
objects.push(sphere);
scene.add(sphere)
sphere = new THREE.Mesh( geometry, redMat );
sphere.position.set(50, 0, 0);
objects.push(sphere);
scene.add(sphere)
sphere = new THREE.Mesh( geometry, blueMat );
sphere.position.set(0, 0, 50);
objects.push(sphere);
scene.add(sphere)
sphere = new THREE.Mesh( geometry, greenMat );
sphere.position.set(0, 50, 0);
objects.push(sphere);
scene.add(sphere)
sphere = new THREE.Mesh( geometry, magentaMat );
sphere.position.set(0, -50, 0);
objects.push(sphere);
scene.add(sphere)
sphere = new THREE.Mesh( geometry, cyanMat );
sphere.position.set(-50, 0, 0);
objects.push(sphere);
scene.add(sphere);
sphere = new THREE.Mesh( geometry, lblueMat );
sphere.position.set(0, 0, -50);
objects.push(sphere);
scene.add(sphere);
scene.add( highlightGroup );
}
function createRenderLoop() {
function render(time) {
time *= 0.001;
renderer.render(scene, camera);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
}
function initEventHandlers() {
function onWindowResize() {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize( window.innerWidth, window.innerHeight );
}
window.addEventListener( 'resize', onWindowResize, false );
onWindowResize()
canvas.addEventListener('contextmenu', event => event.preventDefault());
}
function initOrbitCam() {
const diffToAngle = 0.01;
const hscale = 1.05;
const highlightMat = new THREE.MeshBasicMaterial({
color: 0xffffff,
transparent: true,
opacity: 0.2,
});
let isMouseButtonDown = -1;
let mouseDownPos;
let rightDownDragging = false;
let savedCamPos;
let savedCamLookAt = new THREE.Vector3();
let orbitTarget;
function absScrDist(pos1, pos2) {
return Math.abs(pos1[0] - pos2[0]) + Math.abs(pos1[1] - pos2[1]);
}
function addHighlight(obj) {
const objCopy = obj.clone();
objCopy.material = highlightMat;
objCopy.scale.set(hscale, hscale, hscale);
highlightGroup.add(objCopy);
}
function emptyHighlightGroup() {
highlightGroup.children.slice(0).forEach(child => {
highlightGroup.remove(child);
})
}
function getTarget(camera, event) {
const [x, y] = [event.offsetX, event.offsetY];
const [cw, ch] = [canvas.width, canvas.height];
const mouse3D = new THREE.Vector3( ( x / cw ) * 2 - 1,
-( y / ch ) * 2 + 1,
0.5 );
const raycaster = new THREE.Raycaster();
raycaster.setFromCamera( mouse3D, camera );
const intersects = raycaster.intersectObjects( objects );
console.log(intersects)
if ( intersects.length > 0 ) {
addHighlight(intersects[0].object);
return intersects[0].object.position.clone();
}
const nv = new THREE.Vector3();
camera.getWorldDirection(nv);
return camera.position.clone().add(nv.clone().multiplyScalar(500));
}
function onCanvasMouseDown(event) {
isMouseButtonDown = event.button;
mouseDownPos = [event.offsetX, event.offsetY];
orbitTarget = getTarget(camera, event);
event.preventDefault();
event.stopPropagation();
}
canvas.addEventListener("mousedown", onCanvasMouseDown, false);
function onCanvasMouseUp(event) {
isMouseButtonDown = -1;
rightDownDragging = false;
emptyHighlightGroup();
event.preventDefault();
event.stopPropagation();
}
canvas.addEventListener("mouseup", onCanvasMouseUp, false);
function onCanvasMouseMove(event) {
if (rightDownDragging === false) {
if (isMouseButtonDown === 2) {
const currPos = [event.clientX, event.clientY];
const dragDist = absScrDist(mouseDownPos, currPos);
if (dragDist >= 5) {
rightDownDragging = true;
savedCamPos = camera.position.clone();
camera.getWorldDirection( savedCamLookAt );
}
}
} else {
const xdiff = event.clientX - mouseDownPos[0];
const ydiff = event.clientY - mouseDownPos[1];
const yAngle = xdiff * diffToAngle;
const xAngle = ydiff * diffToAngle;
orbit(-xAngle, -yAngle, savedCamPos.clone(), savedCamLookAt.clone(), orbitTarget)
}
}
canvas.addEventListener("mousemove", onCanvasMouseMove, false);
function orbit(xRot, yRot, camPos, camLookAt, target) {
const newXAxis = camLookAt.clone();
const lx = camLookAt.x;
const lz = camLookAt.z;
newXAxis.x = -lz;
newXAxis.z = lx;
newXAxis.y = 0;
const newCamPos = camPos
.sub(target)
.applyAxisAngle( newXAxis, xRot )
.applyAxisAngle( yaxis, yRot )
.add(target);
camera.position.set(...newCamPos.toArray());
const relLookAt = camLookAt
.applyAxisAngle( newXAxis, xRot )
.applyAxisAngle( yaxis, yRot )
.add(newCamPos);
camera.lookAt(...relLookAt.toArray());
camera.updateProjectionMatrix();
}
}
function setup() {
canvas = document.querySelector('#c');
renderer = new THREE.WebGLRenderer({canvas});
scene = new THREE.Scene();
initCamera();
initLights();
initObjects();
initEventHandlers();
initOrbitCam();
createRenderLoop();
}
setup();
#c {
width: 100vw;
height: 100vh;
display: block;
}
<canvas id="c"></canvas>
<script src="https://unpkg.com/three@0.85.0/examples/js/libs/stats.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/110/three.min.js"></script>
<script src="https://unpkg.com/three@0.85.0/examples/js/controls/OrbitControls.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/gsap/3.2.5/gsap.min.js"></script>
I don't exactly understand what you want to do... but maybe this helps...
Transformations in 3D space happen through matrices, and there are different kinds of transformation matrices (i.e. translation, scale, rotation, ...). If you want to rotate an object around an axis which is not its own, you have to move the object to that axis, rotate it there, and then move it back. In other words, you multiply the coordinates of whatever you want to rotate by a translation matrix, then by a rotation matrix, and then again by a translation matrix. Luckily, by the rules of linear algebra, we can simply multiply all of these matrices together in order, and then multiply the result with the coordinates...
instead of this:
translationMatrix * somePosition;
rotationMatrix * somePosition;
anotherTranslationMatrix * somePosition;
this:
translationMatrix * rotationMatrix * anotherTranslationMatrix * somePosition;
It is a bit vague to explain it like this, but the idea is there. This might seem like a lot of work, but GPUs are highly optimised to perform matrix multiplications, so if you succeed in letting the GPU perform these, it will not be an issue performance-wise...
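As a tiny, hypothetical 2D illustration of that composed translate * rotate * translate product (homogeneous 3x3 matrices; all names invented here):

```javascript
// Rotate a 2D point about an arbitrary pivot by composing
// T(pivot) * R(angle) * T(-pivot) into one homogeneous 3x3 matrix.
const mul = (a, b) =>                     // 3x3 matrix product
  a.map((row, i) =>
    row.map((_, j) => row.reduce((s, _, k) => s + a[i][k] * b[k][j], 0)));

const translate = (tx, ty) => [[1, 0, tx], [0, 1, ty], [0, 0, 1]];
const rotate = a => [
  [Math.cos(a), -Math.sin(a), 0],
  [Math.sin(a),  Math.cos(a), 0],
  [0, 0, 1],
];

function rotateAbout(point, pivot, angle) {
  // one combined matrix instead of three separate transform steps
  const m = mul(mul(translate(pivot[0], pivot[1]), rotate(angle)),
                translate(-pivot[0], -pivot[1]));
  const [x, y] = point;
  return [m[0][0] * x + m[0][1] * y + m[0][2],
          m[1][0] * x + m[1][1] * y + m[1][2]];
}

// Rotating (2,1) by 90° counter-clockwise about (1,1) lands on (1,2).
console.log(rotateAbout([2, 1], [1, 1], Math.PI / 2));
```

The same idea in 3D (4x4 matrices) is what the orbit() function in the other answer does implicitly: subtract the target, rotate, add the target back.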
If you already knew this: welp...
If you did not know this, research some linear algebra, specifically: coordinate spaces, matrix multiplication and transformation matrices.
cheers!

Rendering rapidly-changing arbitrary-sized meshes with Metal and Swift 3

I am trying to render random meshes to an MTKView as fast as the device will allow. Pretty much all the Metal examples I have found show how to draw a piece of geometry for which the buffer size is defined only once (i.e. fixed):
let dataSize = vertexCount * MemoryLayout<VertexWithColor>.size // size of the vertex data in bytes
let vertexBuffer: MTLBuffer = device!.makeBuffer(bytes: verticesWithColorArray, length: dataSize, options: []) // create a new buffer on the GPU
The goal is to eventually generate meshes on the fly given some point cloud input. I've set up drawing to be triggered with a tap as follows:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
if let touch = touches.first {
let touchPoint = touch.location(in: view)
print ("...touch \(touchPoint)")
autoreleasepool {
delaunayView.setupTriangles()
delaunayView.renderTriangles()
}
}
}
I can get the screen to refresh with new triangles, as long as I don't tap too frequently. However, if I tap too quickly (like say a double tap), the app crashes with the following error:
[CAMetalLayerDrawable texture] should not be called after presenting the drawable.
Performance will obviously be linked to the number of triangles drawn. Besides getting the app to function stably, just as important is the question, how can I best take advantage of the GPU to push as many triangles as possible? (In its current state, the app draws about 30,000 triangles at 3 fps on an iPad Air 2).
Any pointers/gotchas for speed and frame rate would be most welcome
The whole project can be found here:
Also, below is the pertinent updated metal class
import Metal
import MetalKit
import GameplayKit
protocol MTKViewDelaunayTriangulationDelegate: NSObjectProtocol{
func fpsUpdate (fps: Int)
}
class MTKViewDelaunayTriangulation: MTKView {
//var kernelFunction: MTLFunction!
var pipelineState: MTLComputePipelineState!
var defaultLibrary: MTLLibrary! = nil
var commandQueue: MTLCommandQueue! = nil
var renderPipeline: MTLRenderPipelineState!
var errorFlag:Bool = false
var verticesWithColorArray : [VertexWithColor]!
var vertexCount: Int
var verticesMemoryByteSize:Int
let fpsLabel = UILabel(frame: CGRect(x: 0, y: 0, width: 400, height: 20))
var frameCounter: Int = 0
var frameStartTime = CFAbsoluteTimeGetCurrent()
weak var MTKViewDelaunayTriangulationDelegate: MTKViewDelaunayTriangulationDelegate?
////////////////////
init(frame: CGRect) {
vertexCount = 100000
//verticesMemoryByteSize = vertexCount * MemoryLayout<VertexWithColor>.size
verticesMemoryByteSize = vertexCount * MemoryLayout<VertexWithColor>.stride // apple recommendation
super.init(frame: frame, device: MTLCreateSystemDefaultDevice())
setupMetal()
//setupTriangles()
//renderTriangles()
}
required init(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
/*
override func draw(_ rect: CGRect) {
step() // needed to update frame counter
autoreleasepool {
setupTriangles()
renderTriangles()
}
} */
func step() {
frameCounter += 1
if frameCounter == 100
{
let frametime = (CFAbsoluteTimeGetCurrent() - frameStartTime) / 100
MTKViewDelaunayTriangulationDelegate?.fpsUpdate(fps: Int(1 / frametime)) // let the delegate know of the frame update
print ("...frametime: \((Int(1/frametime)))")
frameStartTime = CFAbsoluteTimeGetCurrent() // reset start time
frameCounter = 0 // reset counter
}
}
func setupMetal(){
// Steps required to set up metal for rendering:
// 1. Create a MTLDevice
// 2. Create a Command Queue
// 3. Access the custom shader library
// 4. Compile shaders from library
// 5. Create a render pipeline
// 6. Set buffer size of objects to be drawn
// 7. Draw to pipeline through a renderCommandEncoder
// 1. Create a MTLDevice
guard let device = MTLCreateSystemDefaultDevice() else {
errorFlag = true
//particleLabDelegate?.particleLabMetalUnavailable()
return
}
// 2. Create a Command Queue
commandQueue = device.makeCommandQueue()
// 3. Access the custom shader library
defaultLibrary = device.newDefaultLibrary()
// 4. Compile shaders from library
let fragmentProgram = defaultLibrary.makeFunction(name: "basic_fragment")
let vertexProgram = defaultLibrary.makeFunction(name: "basic_vertex")
// 5a. Define render pipeline settings
let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
renderPipelineDescriptor.vertexFunction = vertexProgram
renderPipelineDescriptor.sampleCount = self.sampleCount
renderPipelineDescriptor.colorAttachments[0].pixelFormat = self.colorPixelFormat
renderPipelineDescriptor.fragmentFunction = fragmentProgram
// 5b. Compile renderPipeline with above renderPipelineDescriptor
do {
renderPipeline = try device.makeRenderPipelineState(descriptor: renderPipelineDescriptor)
} catch let error as NSError {
print("render pipeline error: " + error.description)
}
// initialize counter variables
frameStartTime = CFAbsoluteTimeGetCurrent()
frameCounter = 0
} // end of setupMetal
/// Generate set of vertices for our triangulation to use
func generateVertices(_ size: CGSize, cellSize: CGFloat, variance: CGFloat = 0.75, seed: UInt64 = numericCast(arc4random())) -> [Vertex] {
// How many cells we're going to have on each axis (pad by 2 cells on each edge)
let cellsX = (size.width + 4 * cellSize) / cellSize
let cellsY = (size.height + 4 * cellSize) / cellSize
// figure out the bleed widths to center the grid
let bleedX = ((cellsX * cellSize) - size.width)/2
let bleedY = ((cellsY * cellSize) - size.height)/2
let _variance = cellSize * variance / 4
var points = [Vertex]()
let minX = -bleedX
let maxX = size.width + bleedX
let minY = -bleedY
let maxY = size.height + bleedY
let generator = GKLinearCongruentialRandomSource(seed: seed)
for i in stride(from: minX, to: maxX, by: cellSize) {
for j in stride(from: minY, to: maxY, by: cellSize) {
let x = i + cellSize/2 + CGFloat(generator.nextUniform()) + CGFloat.random(-_variance, _variance)
let y = j + cellSize/2 + CGFloat(generator.nextUniform()) + CGFloat.random(-_variance, _variance)
points.append(Vertex(x: Double(x), y: Double(y)))
}
}
return points
} // end of generateVertices
func setupTriangles(){
// generate n random triangles
///////////////////
verticesWithColorArray = [] // empty out vertex array
for _ in 0 ..< vertexCount { // ..< (not ...) so the array length matches the buffer size
//for vertex in vertices {
let x = Float(Double.random(-1.0, 1.0))
let y = Float(Double.random(-1.0, 1.0))
let v = VertexWithColor(x: x, y: y, z: 0.0, r: Float(Double.random()), g: Float(Double.random()), b: Float(Double.random()), a: 0.0)
verticesWithColorArray.append(v)
} // end of for _ in
} // end of setupTriangles
func renderTriangles(){
// 6. Set buffer size of objects to be drawn
//let dataSize = vertexCount * MemoryLayout<VertexWithColor>.size // size of the vertex data in bytes
let dataSize = vertexCount * MemoryLayout<VertexWithColor>.stride // apple recommendation
let vertexBuffer: MTLBuffer = device!.makeBuffer(bytes: verticesWithColorArray, length: dataSize, options: []) // create a new buffer on the GPU
let renderPassDescriptor: MTLRenderPassDescriptor? = self.currentRenderPassDescriptor
// If the renderPassDescriptor is valid, begin the commands to render into its drawable
if renderPassDescriptor != nil {
// Create a new command buffer for each tessellation pass
let commandBuffer: MTLCommandBuffer? = commandQueue.makeCommandBuffer()
// Create a render command encoder
// 7a. Create a renderCommandEncoder for our renderPipeline
let renderCommandEncoder: MTLRenderCommandEncoder? = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptor!)
renderCommandEncoder?.label = "Render Command Encoder"
//////////renderCommandEncoder?.pushDebugGroup("Tessellate and Render")
renderCommandEncoder?.setRenderPipelineState(renderPipeline!)
renderCommandEncoder?.setVertexBuffer(vertexBuffer, offset: 0, at: 0)
// most important below: we tell the GPU to draw a set of triangles, based on the vertex buffer. Each triangle consists of three vertices, starting at index 0 inside the vertex buffer, and there are vertexCount/3 triangles total
//renderCommandEncoder?.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount, instanceCount: vertexCount/3)
renderCommandEncoder?.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount)
///////////renderCommandEncoder?.popDebugGroup()
renderCommandEncoder?.endEncoding() // finalize renderEncoder set up
commandBuffer?.present(self.currentDrawable!) // needed to make sure the new texture is presented as soon as the drawing completes
// 7b. Render to pipeline
commandBuffer?.commit() // commit and send task to gpu
} // end of if renderPassDescriptor
}// end of func renderTriangles()
} // end of class MTKViewDelaunayTriangulation
You shouldn't be calling setupTriangles() or, especially, renderTriangles() from init(). Nor, as per your comment, from touchesBegan(). In general, you should only attempt to draw when the framework calls your override of draw(_:).
How you update for user events depends on the drawing mode of the MTKView, as explained in the class overview. By default, your draw(_:) method is called periodically. In this mode, you shouldn't have to do anything about drawing in touchesBegan(). Just update your class's internal state about what it should draw. The actual drawing will happen automatically a short time later.
If you've configured the view to redraw after setNeedsDisplay(), then touchesBegan() should update internal state and then call setNeedsDisplay(). It shouldn't attempt to draw immediately. A short time after you return control back to the framework (i.e. return from touchesBegan()), it will call draw(_:) for you.
If you've configured the view to only draw when you explicitly call draw(), then you would do that after updating internal state.
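The pattern is the same one browsers use with requestAnimationFrame: event handlers only record what to draw, and the framework-scheduled callback does the actual drawing. A minimal JavaScript sketch of that separation (everything here is an illustrative stand-in, not Metal API):

```javascript
// Input handlers mutate state; only the scheduled draw callback renders.
// This mirrors MTKView's default mode, where draw(_:) is called periodically.
const state = { triangles: [], drawCalls: 0 };

function makeTriangles() {           // stand-in for setupTriangles()
  return Array.from({ length: 3 }, (_, i) => i);
}

function onTap() {                   // analogous to touchesBegan(_:with:)
  state.triangles = makeTriangles(); // update internal state only; no drawing
}

function draw() {                    // analogous to draw(_:), framework-driven
  state.drawCalls += 1;              // render state.triangles here
}

// Simulate: two rapid taps, then two frame callbacks.
onTap();
onTap();                             // no drawable is touched between frames
draw();
draw();
console.log(state.drawCalls);        // 2
```

Because the taps never touch the drawable directly, rapid input can no longer race the presentation of the previous frame, which is what triggered the "should not be called after presenting the drawable" crash.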

GLSL optimization. What is faster?

I'm using OpenGL ES.
And I have two ways of calculating the "dir" vector; which code is faster?
attribute vec2 order;
code1:
if( abs(sinA) < 0.2 ) {
if(order.x == 1.0){
dir = sNormalPrev;
} else {
dir = sNormalNext;
}
} else {
dir *= order.x / sinA;
}
code 2:
float k = step(0.2, abs(sinA));
dir = k * dir * order.x / sinA - (k-1.0) * (step(1.0, order.x + 1.0) * sNormalPrev + step(1.0, -order.x + 1.0) * sNormalNext);
Writing a test, I don't see much of a difference:
var iterationsPerTiming = 40;
var gl = document.createElement("canvas").getContext("webgl");
gl.canvas.width = 1;
gl.canvas.height = 1;
var programInfo1 = twgl.createProgramInfo(gl, ["vs1", "fs"])
var programInfo2 = twgl.createProgramInfo(gl, ["vs2", "fs"]);
var count = new Float32Array(1000000);
for (var i = 0; i < count.length; ++i) {
count[i] = i % 3 / 2;
}
var arrays = {
vertexId: {
data: count, numComponents: 1,
},
};
var bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays);
iterateTest(programInfo1, 10) // prime this path
.then(function() { return iterateTest(programInfo2, 10)}) // prime this path
.then(function() { return iterateTest(programInfo1, 20)})
.then(log)
.then(function() { return iterateTest(programInfo2, 20)})
.then(log);
function iterateTest(programInfo, times) {
return new Promise(function(resolve, reject) {
var timings = [];
var totalTime = 0;
function runNextIteration() {
if (times) {
--times;
timings.push(test(programInfo, iterationsPerTiming));
setTimeout(runNextIteration, 1);
} else {
var totalTime = 0;
var msgs = timings.map(function(timing, ndx) {
totalTime += timing;
return "" + ndx + ": " + timing.toFixed(3);
});
msgs.push("average timing: " + (totalTime / timings.length).toFixed(3));
resolve(msgs.join("\n"));
}
}
runNextIteration();
});
}
function test(programInfo, iterations) {
gl.useProgram(programInfo.program);
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
var startTime = performance.now();
for (var i = 0; i < iterations; ++i) {
twgl.drawBufferInfo(gl, gl.TRIANGLES, bufferInfo, count.length);
}
// this effectively does a gl.finish. It's not useful for real timing
// because it stalls the pipeline, but it should be useful for
// comparing times since the stalling is included in both
var temp = new Uint8Array(4);
gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, temp);
return performance.now() - startTime;
}
function log(msg) {
var div = document.createElement("pre");
div.appendChild(document.createTextNode(msg));
document.body.appendChild(div);
return Promise.resolve();
}
html, body { font-family: monospace; }
<script src="https://twgljs.org/dist/twgl.min.js"></script>
<script id="vs1" type="notjs">
attribute float vertexId;
void main() {
vec2 order = vec2(vertexId, 0);
float sinA = vertexId;
vec3 dir = vec3(0);
vec3 sNormalPrev = vec3(1);
vec3 sNormalNext = vec3(-1);
if( abs(sinA) < 0.2 ) {
if(order.x == 1.0){
dir = sNormalPrev;
} else {
dir = sNormalNext;
}
} else {
dir *= order.x / sinA;
}
gl_Position = vec4(dir, 1.0); // have to use dir
gl_PointSize = 1.0;
}
</script>
<script id="vs2" type="notjs">
attribute float vertexId;
void main() {
vec2 order = vec2(vertexId, 0);
float sinA = vertexId;
vec3 dir = vec3(0);
vec3 sNormalPrev = vec3(1);
vec3 sNormalNext = vec3(-1);
float k = step(0.2, abs(sinA));
dir = k * dir * order.x / sinA - (k-1.0) * (step(1.0, order.x + 1.0) * sNormalPrev + step(1.0, -order.x + 1.0) * sNormalNext);
gl_Position = vec4(dir, 1.0); // have to use dir
gl_PointSize = 1.0;
}
</script>
<script id="fs" type="notjs">
precision mediump float;
void main() {
gl_FragColor = vec4(1);
}
</script>
Maybe my test is bad. Tested on an early-2015 MacBook Pro and an iPhone 6s+.
GPU cores are mostly wide SIMD units and they handle if-statements via masking. Depending on the GPU architecture, the shader compiler converts control statements into masking operations in much the same way you did with your code.
On PCs the GPU driver has enough processing power to properly optimize shaders, so your optimization makes no difference. According to this blog post from 2010, your optimization would have made sense on mobile platforms. I assume this is no longer the case with today's smartphones, as they have enough processing power to properly optimize shaders, and the drivers have matured over time.
You can also try the GLSL optimizer tool that is mentioned in that blog post. Some GPU vendors also provide tools for profiling shaders.
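Before micro-optimizing, it is worth checking that the two formulations even agree. A scalar JavaScript transcription of both shader paths (assuming order.x is always exactly -1.0 or 1.0, which the branchless version requires):

```javascript
// GLSL's step(edge, x): 0.0 if x < edge, else 1.0
const step = (edge, x) => (x < edge ? 0 : 1);

// code 1: the branchy version (scalar dir for illustration)
function branchy(dir, orderX, sinA, prev, next) {
  if (Math.abs(sinA) < 0.2) return orderX === 1 ? prev : next;
  return dir * orderX / sinA;
}

// code 2: the branchless version, transcribed term by term
function branchless(dir, orderX, sinA, prev, next) {
  const k = step(0.2, Math.abs(sinA));
  return k * dir * orderX / sinA
       - (k - 1) * (step(1, orderX + 1) * prev + step(1, -orderX + 1) * next);
}

// They agree for order.x in {-1, 1} and sinA away from 0:
for (const orderX of [-1, 1]) {
  for (const sinA of [-0.5, -0.1, 0.1, 0.5]) {
    const a = branchy(2, orderX, sinA, 7, 9);
    const b = branchless(2, orderX, sinA, 7, 9);
    if (Math.abs(a - b) > 1e-9) throw new Error("mismatch");
  }
}
console.log("both paths agree");
```

Note the branchless form still divides by sinA when k is 0; in GLSL that is harmless masked work, but it is one more reason the "optimization" buys nothing when the compiler already masks the branch.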

How to animate and properly interpolate a QML rotation transform in 3D

This code sample here:
import QtQuick 2.0
Item {
width: 200; height: 200
Rectangle {
width: 100; height: 100
anchors.centerIn: parent
color: "#00FF00"
Rectangle {
color: "#FF0000"
width: 10; height: 10
anchors.top: parent.top
anchors.right: parent.right
}
}
}
Will produce this output:
Now I want to apply a 3D rotation from the center of this green rectangle. First, I want to rotate on X by -45 degrees (bowing down), then on Y by -60 degrees (turning left).
I used the following C++ code snippet using GLM on the side to help me calculate the axis and angle:
// generate rotation matrix from euler angles in X-Y order
// please note that GLM uses radians, not degrees
glm::mat4 rotationMatrix = glm::eulerAngleXY(glm::radians(-45.0f), glm::radians(-60.0f));
// convert the rotation matrix into a quaternion
glm::quat quaternion = glm::toQuat(rotationMatrix);
// extract the rotation axis from the quaternion
glm::vec3 axis = glm::axis(quaternion);
// extract the rotation angle from the quaternion
// and also convert it back to degrees for QML
double angle = glm::degrees(glm::angle(quaternion));
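As a sanity check, the same conversion can be reproduced in plain JavaScript (an illustrative port of the GLM calls, not GLM itself; quaternions are plain {w, x, y, z} objects):

```javascript
// Build a unit quaternion from an axis-angle rotation (angle in degrees).
function quatFromAxisAngle(deg, ax, ay, az) {
  const h = deg * Math.PI / 360.0;   // half angle, in radians
  const s = Math.sin(h);
  return { w: Math.cos(h), x: ax * s, y: ay * s, z: az * s };
}
// Hamilton product a*b: apply b's rotation first, then a's.
function quatMul(a, b) {
  return {
    w: a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
    x: a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
    y: a.w*b.y + a.y*b.w + a.z*b.x - a.x*b.z,
    z: a.w*b.z + a.z*b.w + a.x*b.y - a.y*b.x,
  };
}
// eulerAngleXY(x, y) corresponds to Rx * Ry, so qX * qY:
const q = quatMul(quatFromAxisAngle(-45, 1, 0, 0),
                  quatFromAxisAngle(-60, 0, 1, 0));
// Extract angle (degrees) and normalized axis, as glm::angle/glm::axis do.
const angle = 2 * Math.acos(q.w) * 180 / Math.PI;
const s = Math.sqrt(1 - q.w * q.w);
const axis = { x: q.x / s, y: q.y / s, z: q.z / s };
```

This reproduces GLM's axis and angle to within rounding error.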
The output of this little C++ program gave me an axis of {-0.552483, -0.770076, 0.318976} and an angle of 73.7201. So I updated my sample code to this:
import QtQuick 2.0
Item {
width: 200; height: 200
Rectangle {
width: 100; height: 100
anchors.centerIn: parent
color: "#00FF00"
Rectangle {
color: "#FF0000"
width: 10; height: 10
anchors.top: parent.top
anchors.right: parent.right
}
transform: Rotation {
id: rot
origin.x: 50; origin.y: 50
axis: Qt.vector3d(-0.552483, -0.770076, 0.318976)
angle: 73.7201
}
}
}
Which give me exactly what I wanted to see:
So far so good. Now comes the hard part: how do I animate this? For example, say I want to go from {-45.0, -60.0, 0.0} to {-45.0, -60.0, 90.0}. In other words, I want to animate from here
to here
I plugged that target rotation here
// generate rotation matrix from euler in X-Y-Z order
// please note that GLM uses radians, not degrees
glm::mat4 rotationMatrix = glm::eulerAngleXYZ(glm::radians(-45.0f), glm::radians(-60.0f), glm::radians(90.0f));
// convert the rotation matrix into a quaternion
glm::quat quaternion = glm::toQuat(rotationMatrix);
// extract the rotation axis from the quaternion
glm::vec3 axis = glm::axis(quaternion);
// extract the rotation angle from the quaternion
// and also convert it back to degrees for QML
double angle = glm::degrees(glm::angle(quaternion));
which gave me an axis of {-0.621515, -0.102255, 0.7767} and an angle of 129.007
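This second result can be cross-checked the same way in plain JavaScript (an illustrative port of the eulerAngleXYZ/toQuat/axis/angle calls, not GLM itself; quaternions are plain {w, x, y, z} objects):

```javascript
// Build a unit quaternion from an axis-angle rotation (angle in degrees).
function quatFromAxisAngle(deg, ax, ay, az) {
  const h = deg * Math.PI / 360.0;   // half angle, in radians
  const s = Math.sin(h);
  return { w: Math.cos(h), x: ax * s, y: ay * s, z: az * s };
}
// Hamilton product a*b.
function quatMul(a, b) {
  return {
    w: a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
    x: a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
    y: a.w*b.y + a.y*b.w + a.z*b.x - a.x*b.z,
    z: a.w*b.z + a.z*b.w + a.x*b.y - a.y*b.x,
  };
}
// eulerAngleXYZ(x, y, z) corresponds to Rx * Ry * Rz:
const q = quatMul(quatMul(quatFromAxisAngle(-45, 1, 0, 0),
                          quatFromAxisAngle(-60, 0, 1, 0)),
                  quatFromAxisAngle(90, 0, 0, 1));
const angleDeg = 2 * Math.acos(q.w) * 180 / Math.PI;
const s = Math.sqrt(1 - q.w * q.w);
const axis = { x: q.x / s, y: q.y / s, z: q.z / s };
```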
So I added this animation to my sample
ParallelAnimation {
running: true
Vector3dAnimation {
target: rot
property: "axis"
from: Qt.vector3d(-0.552483, -0.770076, 0.318976)
to: Qt.vector3d(-0.621515, -0.102255, 0.7767)
duration: 4000
}
NumberAnimation {
target: rot;
property: "angle";
from: 73.7201; to: 129.007;
duration: 4000;
}
}
Which 'almost' works. The problem is, if you try it, you will see that the rotation goes completely off its desired rotation axis during the first half of the animation, then fixes itself over the last half. The starting rotation is good, the target rotation is good, but whatever happens in between is not good enough. It is better if I use smaller angles like 45 degrees instead of 90, and worse if I use larger angles like 180 degrees, where it just spins in random directions until it reaches its final target.
How do I get this animation to look right between the start rotation and the target rotation?
------------------- EDIT -------------------
I am adding one more criterion: the answer I am looking for must produce output identical to the screenshots I provided above.
For example, splitting the 3 rotation axes into 3 separate Rotation transforms doesn't give me the right result:
transform: [
Rotation {
id: zRot
origin.x: 50; origin.y: 50;
angle: 0
},
Rotation {
id: xRot
origin.x: 50; origin.y: 50;
angle: -45
axis { x: 1; y: 0; z: 0 }
},
Rotation {
id: yRot
origin.x: 50; origin.y: 50;
angle: -60
axis { x: 0; y: 1; z: 0 }
}
]
Will give me this:
Which is incorrect.
I solved my own problem. I completely forgot that Qt doesn't do spherical linear interpolation! As soon as I wrote my own slerp function, it all worked perfectly.
Here's my code for those who are seeking the answer:
import QtQuick 2.0
Item {
function angleAxisToQuat(angle, axis) {
var a = angle * Math.PI / 180.0;
var s = Math.sin(a * 0.5);
var c = Math.cos(a * 0.5);
return Qt.quaternion(c, axis.x * s, axis.y * s, axis.z * s);
}
function multiplyQuaternion(q1, q2) {
return Qt.quaternion(q1.scalar * q2.scalar - q1.x * q2.x - q1.y * q2.y - q1.z * q2.z,
q1.scalar * q2.x + q1.x * q2.scalar + q1.y * q2.z - q1.z * q2.y,
q1.scalar * q2.y + q1.y * q2.scalar + q1.z * q2.x - q1.x * q2.z,
q1.scalar * q2.z + q1.z * q2.scalar + q1.x * q2.y - q1.y * q2.x);
}
function eulerToQuaternionXYZ(x, y, z) {
var quatX = angleAxisToQuat(x, Qt.vector3d(1, 0, 0));
var quatY = angleAxisToQuat(y, Qt.vector3d(0, 1, 0));
var quatZ = angleAxisToQuat(z, Qt.vector3d(0, 0, 1));
return multiplyQuaternion(multiplyQuaternion(quatX, quatY), quatZ)
}
function slerp(start, end, t) {
var halfCosTheta = ((start.x * end.x) + (start.y * end.y)) + ((start.z * end.z) + (start.scalar * end.scalar));
if (halfCosTheta < 0.0)
{
end.scalar = -end.scalar
end.x = -end.x
end.y = -end.y
end.z = -end.z
halfCosTheta = -halfCosTheta;
}
if (Math.abs(halfCosTheta) > 0.999999)
{
return Qt.quaternion(start.scalar + (t * (end.scalar - start.scalar)),
start.x + (t * (end.x - start.x )),
start.y + (t * (end.y - start.y )),
start.z + (t * (end.z - start.z )));
}
var halfTheta = Math.acos(halfCosTheta);
var s1 = Math.sin((1.0 - t) * halfTheta);
var s2 = Math.sin(t * halfTheta);
var s3 = 1.0 / Math.sin(halfTheta);
return Qt.quaternion((s1 * start.scalar + s2 * end.scalar) * s3,
(s1 * start.x + s2 * end.x ) * s3,
(s1 * start.y + s2 * end.y ) * s3,
(s1 * start.z + s2 * end.z ) * s3);
}
function getAxis(quat) {
var tmp1 = 1.0 - quat.scalar * quat.scalar;
if (tmp1 <= 0) return Qt.vector3d(0.0, 0.0, 1.0);
var tmp2 = 1 / Math.sqrt(tmp1);
return Qt.vector3d(quat.x * tmp2, quat.y * tmp2, quat.z * tmp2);
}
function getAngle(quat) {
return Math.acos(quat.scalar) * 2.0 * 180.0 / Math.PI;
}
width: 200; height: 200
Rectangle {
width: 100; height: 100
anchors.centerIn: parent
color: "#00FF00"
Rectangle {
color: "#FF0000"
width: 10; height: 10
anchors.top: parent.top
anchors.right: parent.right
}
transform: Rotation {
id: rot
origin.x: 50; origin.y: 50
axis: getAxis(animator.result)
angle: getAngle(animator.result)
}
}
NumberAnimation
{
property quaternion start: eulerToQuaternionXYZ(-45, -60, 0)
property quaternion end: eulerToQuaternionXYZ(-45, -60, 180)
property quaternion result: slerp(start, end, progress)
property real progress: 0
id: animator
target: animator
property: "progress"
from: 0.0
to: 1.0
duration: 4000
running: true
}
}
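The slerp helper above can be sanity-checked outside Qt. Below is a plain-JavaScript port (with Qt.quaternion replaced by a {scalar, x, y, z} object, an assumption made purely for testing), fed with the start/end axis-angle pairs quoted earlier in the question:

```javascript
// Plain-JS stand-in for Qt.quaternion: build one from axis-angle (degrees).
function fromAxisAngle(angleDeg, axis) {
  const h = angleDeg * Math.PI / 360.0;   // half angle, in radians
  const s = Math.sin(h);
  return { scalar: Math.cos(h), x: axis.x * s, y: axis.y * s, z: axis.z * s };
}

// Direct port of the QML slerp above.
function slerp(start, end, t) {
  let e = { scalar: end.scalar, x: end.x, y: end.y, z: end.z };
  let halfCosTheta = start.x * e.x + start.y * e.y + start.z * e.z
                   + start.scalar * e.scalar;
  if (halfCosTheta < 0.0) {                // negate to take the shorter arc
    e = { scalar: -e.scalar, x: -e.x, y: -e.y, z: -e.z };
    halfCosTheta = -halfCosTheta;
  }
  if (Math.abs(halfCosTheta) > 0.999999) { // nearly parallel: plain lerp
    return { scalar: start.scalar + t * (e.scalar - start.scalar),
             x: start.x + t * (e.x - start.x),
             y: start.y + t * (e.y - start.y),
             z: start.z + t * (e.z - start.z) };
  }
  const halfTheta = Math.acos(halfCosTheta);
  const s1 = Math.sin((1.0 - t) * halfTheta);
  const s2 = Math.sin(t * halfTheta);
  const s3 = 1.0 / Math.sin(halfTheta);
  return { scalar: (s1 * start.scalar + s2 * e.scalar) * s3,
           x: (s1 * start.x + s2 * e.x) * s3,
           y: (s1 * start.y + s2 * e.y) * s3,
           z: (s1 * start.z + s2 * e.z) * s3 };
}

// Start/end built from the axis-angle pairs computed in the question.
const start = fromAxisAngle(73.7201, { x: -0.552483, y: -0.770076, z: 0.318976 });
const end   = fromAxisAngle(129.007, { x: -0.621515, y: -0.102255, z: 0.7767 });

// Every interpolated quaternion stays unit length, so each intermediate
// frame is a pure rotation; that is exactly what the naive axis/angle
// animation failed to guarantee.
const mid = slerp(start, end, 0.5);
const midNorm = Math.hypot(mid.scalar, mid.x, mid.y, mid.z);
```

At t=0 and t=1 slerp reproduces the endpoints exactly, and intermediate quaternions keep unit norm, which is why the animated motion stays on the shortest rotational arc.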
You are trying to do this in the wrong way. You can combine transformations and animate just one of them; this way you will achieve exactly what you need.
Another problem I see is that you write about degrees, but in the code I see radians :).
Bottom line, it should look like this:
Rectangle {
width: 100; height: 100
anchors.centerIn: parent
color: "#00FF00"
Rectangle {
color: "#FF0000"
width: 10; height: 10
anchors.top: parent.top
anchors.right: parent.right
}
transform: [
Rotation {
id: zRot
origin.x: 50; origin.y: 50;
angle: 0
},
Rotation {
id: xRot
origin.x: 50; origin.y: 50;
angle: 45
axis { x: 1; y: 0; z: 0 }
},
Rotation {
id: yRot
origin.x: 50; origin.y: 50;
angle: 60
axis { x: 0; y: 1; z: 0 }
}
]
NumberAnimation {
running: true
loops: 100
target: zRot;
property: "angle";
from: 0; to: 360;
duration: 4000;
}
}
The result is different from the one in your pictures, but that is because you mixed up degrees and radians. I used the transformation described in your text, not the one from your code.