Rotating a 3D mesh in DirectX - C++

I have inherited a DirectX project which I am trying to improve. The problem I am having is that I have 2 meshes and I want to move one independently of the other. At the moment I can manipulate the world matrix simply enough, but I am unable to rotate an individual mesh.
V( g_MeshLeftWing.Create( pd3dDevice, L"Media\\Wing\\Wing.sdkmesh", true));
loads the mesh, and later it is rendered:
renderMesh(pd3dDevice, &g_MeshLeftWing );
Is there a way I can rotate the mesh? I tried transforming it using a matrix, with no success:
g_MeshLeftWing.TransformMesh(&matLeftWingWorld,0);
Any help would be great.

Firstly, you appear to be loading a ".sdkmesh" file. It was documented heavily in the DirectX SDK that ".sdkmesh" was made for the SDK samples and should not be used as an actual mesh loading/drawing solution.
I would therefore advise you to start looking at alternative means of loading and drawing your model. Not only will that give you a greater understanding of DirectX, it should ultimately answer your question in the long run!
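In the meantime, the usual way to rotate one mesh independently is not to transform the vertex data, but to give each mesh its own world matrix and upload it just before that mesh's draw call. A minimal sketch, assuming a D3D11/DXUT-style setup with DirectXMath; UpdateWorldCB is a hypothetical helper standing in for however your renderMesh pipeline updates its constant buffer, and renderMesh/g_MeshLeftWing are the symbols from your question:

#include <d3d11.h>
#include <DirectXMath.h>
using namespace DirectX;

// Hypothetical helper: writes 'world' into whatever constant buffer the
// shaders used by renderMesh() actually read.
void UpdateWorldCB(ID3D11DeviceContext* ctx, FXMMATRIX world);

// These come from the question's project:
class CDXUTSDKMesh;                                     // DXUT's sdkmesh wrapper
extern CDXUTSDKMesh g_MeshLeftWing;
void renderMesh(ID3D11Device* dev, CDXUTSDKMesh* mesh);

void RenderLeftWing(ID3D11Device* pd3dDevice, ID3D11DeviceContext* ctx,
                    float wingAngle)
{
    // Per-mesh world matrix: rotate the wing about its own Y axis first,
    // then translate it to its place in the world (example values).
    XMMATRIX matLeftWingWorld = XMMatrixRotationY(wingAngle)
                              * XMMatrixTranslation(-2.0f, 0.0f, 0.0f);

    UpdateWorldCB(ctx, matLeftWingWorld);     // shader now sees the wing's matrix
    renderMesh(pd3dDevice, &g_MeshLeftWing);  // existing draw call from the question
}

The other mesh keeps its own world matrix, so the two can be moved independently without touching the global world matrix.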

Raycasting: render 2D chest CT scans to 3D using OpenGL/C++

I am going to split this question into 3 parts.
First, I've been given this problem and I don't know where to start. If you have solved a related problem, could you give me some hints and keywords to help me do some more research?
I have done some research on my own.
So here are some 2D chest CT scans (sorry, due to the reputation rule I can't embed the images directly).
All photos are taken from the same angle, so I think I can simply read each photo into a vector of pixels and do some thresholding so that all black and black-ish pixels become empty (non-colored) pixels. Next, I'll create a vector called vector_of_photo holding those vectors. The index of each slice vector in vector_of_photo then becomes its Z index.
Now I can render a 3D volume from those vectors of pixels, right?
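In code, I imagine the stacking step looking something like this (loadGrayscaleImage is a placeholder for whatever image library I end up using; dimensions and threshold are just examples):

#include <cstdint>
#include <string>
#include <vector>

using Slice  = std::vector<std::uint8_t>;  // width*height intensities, row-major
using Volume = std::vector<Slice>;         // outer index = Z (slice number)

// Placeholder helper: decodes one CT slice to 8-bit grayscale using
// whatever image library you pick (stb_image, DevIL, ...).
Slice loadGrayscaleImage(const std::string& file, int width, int height);

Volume buildVolume(const std::vector<std::string>& files,
                   int width, int height, std::uint8_t threshold)
{
    Volume vol;
    for (const std::string& f : files) {
        Slice s = loadGrayscaleImage(f, width, height);
        for (std::uint8_t& p : s)
            if (p < threshold) p = 0;      // black-ish pixels become empty voxels
        vol.push_back(std::move(s));
    }
    return vol;                            // vol[z][y*width + x] is one voxel
}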
Second, I'm having trouble understanding the raycasting algorithm.
I think the idea is this: once I have a box of pixels, every time I rotate the box, straight lines are cast from the camera's angle into the box; each line stops at the first colored pixel it finds and renders that pixel (or more specifically, copies the pixel to the corresponding location on the image plane).
Did I understand it correctly?
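Concretely, I picture each ray doing something like this (sample is a placeholder volume lookup; rotating the volume/camera is left out):

#include <cstdint>

// Placeholder volume lookup: returns the voxel intensity at (x, y, z),
// or 0 for empty space / coordinates outside the volume.
std::uint8_t sample(float x, float y, float z);

// March from (ox, oy, oz) along the unit direction (dx, dy, dz) and
// return the first non-empty voxel hit; that value is what gets copied
// to the ray's pixel on the image plane.
std::uint8_t castRay(float ox, float oy, float oz,
                     float dx, float dy, float dz,
                     float maxDist, float step)
{
    for (float t = 0.0f; t < maxDist; t += step) {
        std::uint8_t v = sample(ox + t * dx, oy + t * dy, oz + t * dz);
        if (v != 0)
            return v;      // stop at the first hit
    }
    return 0;              // the ray crossed only empty space
}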
Finally, the OpenGL/C++ part is just the option I think I'm going to use to solve this problem. I'm not sure whether it is a good idea, so please give me some hints about the programming languages, libraries, or modules I should look at.
I happen to be working on the same problem in my spare time. Haha :)
Here is one approach to your problem:
Load the images into your application, such that you get the 3D volumetric dataset that you describe.
Remove all points that don't fit within some range of values (e.g. 0.4 to 0.6 of full brightness). You may need to apply preprocessing and filtering; a sketch of this step follows the list.
Fit a mesh to the resulting point cloud with open-source software. Here is a good blog post about that:
https://towardsdatascience.com/5-step-guide-to-generate-3d-meshes-from-point-clouds-with-python-36bad397d8ba
Take the resulting mesh (probably an STL file) and visualize it in any software you want (Blender 3D, Unity 3D, Cinema 4D, a custom OpenGL application), anything really.
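To make step 2 concrete, here is a minimal sketch of turning the volume into a point cloud by keeping only voxels inside a brightness window (the volume layout and the window bounds are assumptions, not code from my own project):

#include <cstddef>
#include <cstdint>
#include <vector>

struct Point { float x, y, z; };

// vol[z][y*width + x] is one 8-bit voxel, matching the question's stacking idea.
std::vector<Point> volumeToPointCloud(const std::vector<std::vector<std::uint8_t> >& vol,
                                      int width, int height,
                                      std::uint8_t lo, std::uint8_t hi)
{
    std::vector<Point> cloud;
    for (std::size_t z = 0; z < vol.size(); ++z)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                std::uint8_t v = vol[z][y * width + x];
                if (v >= lo && v <= hi)            // e.g. 102..153 for 0.4..0.6
                    cloud.push_back({ float(x), float(y), float(z) });
            }
    return cloud;                                  // hand this to the mesh fitter
}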
My own approach to this problem is very similar to the one you suggest in your question, and I have already made some headway. Therefore, I thought it would be good to suggest another route.
NOTE: Please be aware that what you are working on is not a trivial problem. It's a large project, and there are many commercial companies that have put years into doing just this. It is a great project for learning OpenGL, rendering, and other concepts. It's perfectly doable, but you may be looking at several months of work and lots of trial and error. Good luck!
It's not often that two people happen to be working on the same problem, so if you want to discuss further, feel free to contact me on LinkedIn and/or post a comment below. www.linkedin.com/in/michael-sohnen-a2454b1b2

Vertex buffer not clearing properly

Context
I'm a beginner in 3D graphics, starting out with Vulkan, which I already know is not recommended (spare me, please). I am currently working on a university project to develop the base of a 3D computer graphics engine based on the Vulkan API.
The problem
(Image: example of running the app to render the classic 2D triangle.)
(Image: drawing a 3D mesh after having drawn the triangle.)
So as you can see in the images above I want to be able to:
Run the engine.
Choose an object to be drawn.
Close the window.
Choose another object to be drawn.
Open the same window back up with only the last object chosen visible.
And the way I have been doing this is essentially by cleaning up the whole swap chain and recreating it from scratch once the window is closed and a new object has been chosen. Now, I'm aware this probably sounds like terrorism to any computer graphics engineer, but the reason I'm doing it is that I don't know a better way; I have just finished the Vulkan tutorial.
Solutions tried
I have checked that I call vkDestroyBuffer and vkFreeMemory on the current vertex buffer before recreating it when I choose a different object.
I have disabled depth testing entirely in case it had something to do with it; it doesn't.
Note: the code is extensive and I really don't have a clue which part of it could be relevant to the problem, so I opted not to clutter the question. If there is a specific part you think might help you find the solution, please request it.
Thank you for taking the time to read my question.
A comment by user369070 ended up drawing my attention to the function I use to read OBJ files, which made me realize that this function wasn't clearing a data structure I use to store the vertices of the chosen object before passing them to the vertex buffer.
I just had to add vertices = {}; at the top of the function to solve it.
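In sketch form, the fix looks like this (the Vertex type and the parsing body are schematic; vertices is the persistent vector from my engine):

#include <string>
#include <vector>

struct Vertex { float pos[3]; };   // stand-in for the engine's vertex type

std::vector<Vertex> vertices;      // persists across loads, hence the bug

void readOBJ(const std::string& path)
{
    vertices = {};                 // the fix: forget the previous object first
    // ... parse 'path' and push_back the new object's vertices as before ...
}

Without that first line, the old object's vertices stayed in the vector, so the recreated vertex buffer still contained the previously drawn mesh.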

CGAL - Triangulate Mesh, Return Face to Face Mapping?

I have a quad mesh which I want to triangulate, but I would also like a mapping which records which quads of the original mesh became which triangles on the resulting mesh.
Obviously I am aware of the
CGAL::Polygon_mesh_processing::triangulate_faces(mesh);
function, but I was wondering whether it is possible to return this information via the NamedParameters?
This functionality is NOT provided, but it would be rather straightforward to add. In what context do you use it (homework, another open-source project, a company, ...)? What is your time frame?
Let me add that I have added the functionality in a GitHub PR. It is not necessarily the final version, as it has to go through the CGAL review process first. Maybe I can even get feedback from you.
Have a look at the example where I copy the non-triangle mesh, and while copying I create a face-to-face map. This map is then passed to the visitor that updates it.
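Until the PR is merged, a workaround along these lines may do. This is not the PR's visitor API, just a sketch: it triangulates one face at a time and assumes a freshly loaded Surface_mesh with no removed elements, so new faces are appended at the end of the index range, and it assumes the input descriptor survives as one of the triangles:

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polygon_mesh_processing/triangulate_faces.h>
#include <CGAL/Surface_mesh.h>
#include <map>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;
typedef Mesh::Face_index                                    Face;

std::map<Face, std::vector<Face> > triangulate_with_map(Mesh& mesh)
{
  std::map<Face, std::vector<Face> > quad_to_tris;
  // Snapshot the original faces before any of them get split.
  std::vector<Face> originals(mesh.faces().begin(), mesh.faces().end());
  for (Face f : originals) {
    Mesh::size_type before = mesh.number_of_faces();
    CGAL::Polygon_mesh_processing::triangulate_face(f, mesh);
    quad_to_tris[f].push_back(f);              // f is kept as one of the triangles
    for (Mesh::size_type i = before; i < mesh.number_of_faces(); ++i)
      quad_to_tris[f].push_back(Face(i));      // the other triangles were appended
  }
  return quad_to_tris;
}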

Silhouette Detection using old gl

I have been assigned to implement shadows in the project I am working on. Since we have one light source and our embedded hardware is very old (it doesn't even have a GPU), we thought the stencil-buffer implementation of shadow volumes would fit our app best.
As a first step I want to implement silhouette detection, as described in the link. The link is very good, but it uses a geometry shader to compute the dot products of the neighboring faces' normals with the light direction. Since we still use the old fixed pipeline, I won't be able to use that part of the example.
I wanted to ask: is the best way for me to do all these dot products myself, or is there an old OpenGL trick or function call which may help me?
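For reference, the fixed pipeline has no built-in call for this, so the CPU-side test usually ends up looking something like the sketch below. The types and the precomputed edge-to-face adjacency are assumptions; lightDir is for a directional light (for a point light, use the direction from a point on the face to the light):

#include <vector>

struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct Edge {
    int v0, v1;         // endpoint vertex indices
    int faceA, faceB;   // the two faces sharing this edge, precomputed once
};

// faceNormals[i] is face i's normal; an edge is on the silhouette when
// exactly one of its two adjacent faces points toward the light.
bool isSilhouetteEdge(const Edge& e,
                      const std::vector<Vec3>& faceNormals,
                      const Vec3& lightDir)
{
    bool aLit = dot(faceNormals[e.faceA], lightDir) > 0.0f;
    bool bLit = dot(faceNormals[e.faceB], lightDir) > 0.0f;
    return aLit != bLit;    // one lit, one unlit => silhouette edge
}

Building the edge-to-face adjacency once at load time keeps the per-frame cost down to one dot product per face plus one comparison per edge.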

How to create fast and easy scene-independent shadows w/o shaders in OpenGL

Say I have some mesh (e.g. a sphere) in the center of a room full of cubes, plus one light source. How can I do fast and easy shadow casting in OpenGL, using only "standard" (fixed-function) calls? Note: the result must contain the cube and sphere shadows as well.
If you can generate a silhouette of the sphere then you could use shadow volumes. nVidia hardware has also supported fixed-function shadow mapping for a fair while.
Shadow volumes have the disadvantage of very high fill rate requirements. Shadow maps can be better but require an extra pass.
If you are projecting onto a single plane, it may well be easier to just project the object onto the plane.
There is no fast and easy way. There are lots of different techniques that each have their own pros and cons. You can look at a project I host on GitHub that uses very simple code to create a shadow using the shadow volume technique (http://iuiz.github.com/VolumeShadow/). It is written in Java, but it should not be hard to port it to any other language.
The most important ways to create shadows are the so-called "shadow mapping" method, where you render your scene to a texture (with the camera at the light source, directed at each shadow-casting object), and the shadow volume method (made famous by Doom 3).
I've found one way, using stencil buffers. After being confused for a while, I finally got the idea: with this approach the hardest part is looping through each light source and projecting all scene objects. It looks prettier than texture shadowing and works faster than volumetric shadows. Here and here are some resources which helped me understand the matrix multiplication step (it confused me a bit when I was looking through the dino demo). For me, this method is the easiest to understand and use. The only question left to solve is how to calculate the projection matrix.
Although this method could be changed a bit using textures as shown here.
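For completeness, the projection matrix in question is the classic planar-shadow matrix; here is a minimal sketch (the plane/light values and drawCasters are placeholders, not taken from the demos above):

#include <GL/gl.h>

// Builds the matrix that squashes geometry onto the plane
// a*x + b*y + c*z + d = 0 as seen from the light L (L[3] = 1 for a
// point light, 0 for a directional one). Column-major, ready for
// glMultMatrixf.
void shadowMatrix(GLfloat m[16], const GLfloat plane[4], const GLfloat L[4])
{
    GLfloat dp = plane[0] * L[0] + plane[1] * L[1]
               + plane[2] * L[2] + plane[3] * L[3];
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            m[col * 4 + row] = ((row == col) ? dp : 0.0f) - L[row] * plane[col];
}

// Typical fixed-function usage: draw the scene normally, then redraw the
// casters flattened and darkened (stencil-masked to avoid double blending):
//   GLfloat m[16];
//   shadowMatrix(m, floorPlane, lightPos);
//   glPushMatrix();
//   glMultMatrixf(m);
//   drawCasters();   // placeholder for the scene's shadow-casting objects
//   glPopMatrix();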
Thanks everybody! =)