Does anyone know of a graphics library for the simple transformation of a point from one 2D coordinate system to another coordinate system that is rotated by an angle and offset by a translation?
And is there any graphing tool for plotting and verifying the result visually?
A double[3][3] homogeneous transformation matrix is all you need for 2D: it encodes the rotation and the translation in a single matrix multiply.
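For example, here is a minimal sketch of that idea (the helper names are my own, not from any library): a rotation by an angle and a translation packed into one double[3][3] matrix, applied to a point in homogeneous coordinates.

```cpp
#include <cmath>
#include <cstdio>

// Multiply a 2D point (in homogeneous coordinates) by a 3x3 transform.
void transformPoint(const double m[3][3], double x, double y,
                    double &outX, double &outY) {
    outX = m[0][0] * x + m[0][1] * y + m[0][2];
    outY = m[1][0] * x + m[1][1] * y + m[1][2];
    // m[2][*] stays {0, 0, 1} for a plain affine transform.
}

int main() {
    const double kPi = 3.14159265358979323846;
    double angle = 30.0 * kPi / 180.0;   // rotation between the two systems
    double tx = 5.0, ty = -2.0;          // offset of the new origin

    // Rotation followed by translation, packed into one matrix.
    double m[3][3] = {
        { std::cos(angle), -std::sin(angle), tx },
        { std::sin(angle),  std::cos(angle), ty },
        { 0.0,              0.0,             1.0 }
    };

    double x, y;
    transformPoint(m, 1.0, 0.0, x, y);
    std::printf("(1, 0) maps to (%f, %f)\n", x, y);
}
```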
GDAL includes pretty much every graphic transform you could ask for, and while it is big and hence takes some time to get used to, it is a great framework to move forward with.
This isn't a library, but it's a blog by someone who does this kind of thing:
http://polymathprogrammer.com/
It's got some good theory if you want to know the "behind the scenes".
The Anti-Grain Geometry (AGG) library contains code that can do this, and you can also go further and use it for drawing as well, but you don't have to. You should look at the agg::trans_affine class in the agg_trans_affine.h file.
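If I remember the AGG API correctly, transforming a point looks roughly like the sketch below; treat the exact call names and the agg::pi constant as assumptions and check agg_trans_affine.h for your version.

```cpp
#include "agg_trans_affine.h"

// Build a transform that rotates by 30 degrees and then translates,
// then map a point from the old coordinate system into the new one.
int main() {
    agg::trans_affine m;
    m *= agg::trans_affine_rotation(30.0 * agg::pi / 180.0);
    m *= agg::trans_affine_translation(5.0, -2.0);

    double x = 1.0, y = 0.0;
    m.transform(&x, &y);   // x, y now hold the transformed coordinates
    return 0;
}
```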
Dave
Related
I am going to split this question into 3 parts.
First, I've been given this problem and I don't know where to start. If you have solved a related problem, would you give me some hints and keywords to help me do more research?
I have done some research on my own.
So here are some 2D chest CT scans (sorry, due to the reputation rule I can't embed the images directly).
All photos are taken at the same angle. So I think I can simply read each photo into a vector of pixels and do some thresholding so that all black and near-black pixels become empty (uncolored) pixels. Next, I'll create a vector called vector_of_photo holding those vectors. The index of each vector in vector_of_photo then becomes the Z-index.
Now I can render a 3D image from those vectors of pixels, right?
Second, I have trouble understanding the raycasting algorithm.
I think the idea is: once I have a box of pixels, every time I rotate the box, straight lines are cast from the camera toward the box; each line stops as soon as it hits a colored pixel and renders that pixel (or, more specifically, copies the pixel to the corresponding location on the image plane).
Did I understand it correctly?
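In code terms, what I imagine is something like this sketch (the Volume type and the numbers are made up, just to illustrate my understanding):

```cpp
#include <vector>

// March along each ray through the voxel box and stop at the first
// non-empty voxel; that voxel's value is what gets drawn for this ray.
struct Volume {
    int w, h, d;
    std::vector<unsigned char> voxels;               // 0 = empty after thresholding
    unsigned char at(int x, int y, int z) const {
        return voxels[(z * h + y) * w + x];
    }
};

unsigned char castRay(const Volume &vol, float ox, float oy, float oz,
                      float dx, float dy, float dz) {
    const float step = 0.5f;                         // smaller = more accurate, slower
    for (float t = 0.0f; t < 1000.0f; t += step) {
        int x = int(ox + t * dx), y = int(oy + t * dy), z = int(oz + t * dz);
        if (x < 0 || y < 0 || z < 0 || x >= vol.w || y >= vol.h || z >= vol.d)
            continue;                                // sample point is outside the box
        if (vol.at(x, y, z) != 0)
            return vol.at(x, y, z);                  // first hit: this is the pixel to draw
    }
    return 0;                                        // ray passed through empty space
}

int main() {
    Volume vol{4, 4, 4, std::vector<unsigned char>(64, 0)};
    vol.voxels[(2 * 4 + 2) * 4 + 2] = 255;           // one filled voxel at (2,2,2)
    unsigned char hit = castRay(vol, 2.0f, 2.0f, 0.0f, 0.0f, 0.0f, 1.0f);
    return hit ? 0 : 1;
}
```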
Finally, OpenGL/C++ is just the option I think I'm going to use to solve this problem. I'm not sure whether it is a good idea or not, so please give me some hints about the programming language, library, or module I should take a look at.
I happen to be working on the same problem in my spare time. Haha :)
Here is one approach to your problem:
Load the images into your application, such that you get the 3D volumetric dataset that you describe
Remove all points that don't fit within some range of values (e.g. 0.4/1.0 to 0.6/1.0 brightness). You may need to apply preprocessing and filtering; see the sketch after this list.
Fit a mesh to the resulting point cloud with open-source software. Here is a good blog post about that
https://towardsdatascience.com/5-step-guide-to-generate-3d-meshes-from-point-clouds-with-python-36bad397d8ba
Take the resulting mesh (probably an STL file) and visualize it in whatever software you want (Blender 3D, Unity 3D, Cinema 4D, a custom OpenGL application), anything really.
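As a rough sketch of steps 1 and 2 (the file names, slice count, and threshold window are placeholders; I used OpenCV only for image loading, any image library works):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>
#include <cstdio>

struct Point3D { int x, y, z; unsigned char value; };

int main() {
    std::vector<Point3D> cloud;

    // Step 1: load the slices in order; the slice index becomes the Z coordinate.
    for (int z = 0; z < 100; ++z) {                       // 100 slices is a placeholder
        char name[64];
        std::snprintf(name, sizeof(name), "slice_%03d.png", z);
        cv::Mat slice = cv::imread(name, cv::IMREAD_GRAYSCALE);
        if (slice.empty()) break;

        // Step 2: keep only voxels inside a brightness window (placeholder range).
        for (int y = 0; y < slice.rows; ++y)
            for (int x = 0; x < slice.cols; ++x) {
                unsigned char v = slice.at<unsigned char>(y, x);
                if (v >= 100 && v <= 160)                 // roughly 0.4..0.6 of 255
                    cloud.push_back({x, y, z, v});
            }
    }
    std::printf("kept %zu points\n", cloud.size());
}
```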
My own approach to this problem is very similar to the one you suggest in your question, and I have already made some headway. Therefore, I thought it would be good to suggest another route.
NOTE: Please be aware that what you are working on is not a trivial problem. It's a large project, and there are many commercial companies that have put years into doing just this. It is a great project for learning OpenGL, rendering, and other concepts. It's perfectly doable, but you may be looking at several months of work and lots of trial and error. Good luck!
It's not often that two people happen to be working on the same problem, so if you want to discuss further, feel free to contact me on LinkedIn and/or post a comment below. www.linkedin.com/in/michael-sohnen-a2454b1b2
Say there are 3 boxes on the screen. How can I go about touching one of them to pick it up and "throw" it at the others? I have the rest of the world implemented but can't find much information on how to grab/drag/toss physics objects. Is there any sample code or documentation out there that would help with this?
It depends on what you are attempting to do. It is a physics simulation, and as such the typical way of interacting with the system is by applying forces to objects, as opposed to directly manipulating their x,y coordinates. But you can in fact do either. I believe the most common approach is to use a mouse joint. A Google search on b2MouseJoint will turn up the documentation and several examples, including this one:
http://muhammedalee.wordpress.com/tag/b2mousejoint/
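A rough sketch of the usual pattern (Box2D 2.2+ style; field names and signatures vary slightly between versions, so treat this as an outline rather than exact API):

```cpp
#include <Box2D/Box2D.h>

int main() {
    b2World world(b2Vec2(0.0f, -10.0f));

    // A static ground body: the mouse joint needs a second body to attach to.
    b2BodyDef groundDef;
    b2Body *ground = world.CreateBody(&groundDef);

    // A dynamic box, the thing we want to grab and throw.
    b2BodyDef boxDef;
    boxDef.type = b2_dynamicBody;
    boxDef.position.Set(0.0f, 5.0f);
    b2Body *box = world.CreateBody(&boxDef);
    b2PolygonShape shape;
    shape.SetAsBox(0.5f, 0.5f);
    box->CreateFixture(&shape, 1.0f);

    // Touch-down: create a mouse joint targeting the touch point.
    b2MouseJointDef md;
    md.bodyA = ground;
    md.bodyB = box;
    md.target = box->GetPosition();
    md.maxForce = 1000.0f * box->GetMass();
    b2MouseJoint *mouseJoint = (b2MouseJoint *)world.CreateJoint(&md);

    // Drag: each frame, steer the joint toward the finger and step the world.
    mouseJoint->SetTarget(b2Vec2(2.0f, 6.0f));
    world.Step(1.0f / 60.0f, 8, 3);

    // Release: destroy the joint; the body keeps its velocity, which is the "throw".
    world.DestroyJoint(mouseJoint);
    mouseJoint = nullptr;

    // Alternatively, fake a flick with an impulse (older Box2D versions
    // omit the final wake flag in ApplyLinearImpulse).
    box->ApplyLinearImpulse(b2Vec2(5.0f, 5.0f), box->GetWorldCenter(), true);
    return 0;
}
```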
I'm looking for a way to warp an image similar to how the Liquify/IWarp tool works in Photoshop/GIMP.
I would like to use it to move a few points on an image to make it look wider than it was originally.
Does anyone have any ideas on libraries that could be used to do this? I'm currently using OpenCV in the same project, so if there's a way to do it with that it would be easiest, but I'm open to anything really.
Thanks.
EDIT: Here's an example of what I'm looking to do: http://i.imgur.com/wMOzq.png
All I've done there is pull a few points out sideways, and that's what I'm looking to do from inside my application.
From the search 'image warp operator source c++' I get:
"... Added function 'CImg::[get_]warp()' that can warp an image using a deformation ... Added function 'CImg::save_cpp()' allowing to save an image directly as C/C++ source code. ..."
so CImg could do well for you.
OpenCV's remap can accomplish this. You only have to provide x and y displacement maps. I would suggest creating the displacement maps directly if you are clever; that works well for brush-stroke manipulation similar to Photoshop's Liquify. The mesh-warp and sparse-point-map approaches are another option, but they essentially compute the displacement map through interpolation.
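For instance, a minimal sketch of that idea, assuming a single radial "push out" brush around one point (the center, radius, strength, and file names are made up):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

int main() {
    cv::Mat src = cv::imread("input.png");          // placeholder file name
    if (src.empty()) return 1;

    cv::Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);
    const float cx = src.cols * 0.5f, cy = src.rows * 0.5f;  // brush center
    const float radius = 100.0f, strength = 0.3f;            // made-up brush values

    // remap works backwards: for every destination pixel we store which source
    // pixel to read, so a "push out" brush samples from closer to the center.
    for (int y = 0; y < src.rows; ++y)
        for (int x = 0; x < src.cols; ++x) {
            float dx = x - cx, dy = y - cy;
            float dist = std::sqrt(dx * dx + dy * dy);
            float factor = 1.0f;
            if (dist < radius) {
                float falloff = 1.0f - dist / radius;        // 1 at center, 0 at edge
                factor = 1.0f - strength * falloff;          // pull sample toward center
            }
            mapX.at<float>(y, x) = cx + dx * factor;
            mapY.at<float>(y, x) = cy + dy * factor;
        }

    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);
    cv::imwrite("warped.png", dst);
}
```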
You may want to take a look at http://code.google.com/p/imgwarp-opencv/. This library seems to be exactly what you need: image warping based on a sparse grid.
Another option is, of course, to generate the displacements yourself and use OpenCV's cv::remap() function.
I use the EasyBMP library and I want to know the most effective algorithms to scale, rotate, shear, and reflect an image. I want the most optimized way to do it.
The most effective way to scale, rotate, shear and reflect is to use the power of your graphics card - for example through OpenGL.
If you still want to do bitmap pixel operations yourself, typically you do this using linear algebra. This is not super easy to figure out, so I recommend finding some good study material, for example this book.
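As a sketch of the linear-algebra approach (a plain grayscale buffer with nearest-neighbour sampling; EasyBMP specifics are deliberately left out): for each destination pixel, apply the inverse rotation to find which source pixel to copy. Scaling, shear, and reflection follow the same pattern with a different matrix.

```cpp
#include <cmath>
#include <vector>

// Rotate a grayscale image about its center by `angle` radians,
// using inverse mapping with nearest-neighbour sampling.
std::vector<unsigned char> rotateImage(const std::vector<unsigned char> &src,
                                       int w, int h, double angle) {
    std::vector<unsigned char> dst(w * h, 0);
    const double cx = w / 2.0, cy = h / 2.0;
    const double c = std::cos(angle), s = std::sin(angle);

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            // Inverse rotation: which source pixel does this destination pixel come from?
            double sx =  c * (x - cx) + s * (y - cy) + cx;
            double sy = -s * (x - cx) + c * (y - cy) + cy;
            int ix = (int)std::lround(sx), iy = (int)std::lround(sy);
            if (ix >= 0 && ix < w && iy >= 0 && iy < h)
                dst[y * w + x] = src[iy * w + ix];
        }
    return dst;
}
```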
Instead of telling you how to use EasyBMP (which looks like crap), I'm going to suggest you use Magick++ instead. It supports BMP and has all the methods you ask for built in.
Please refer to the Wikipedia link for the transformation matrices
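A rough Magick++ sketch of those operations (the method names are from memory of the Magick++ API and the file names are placeholders, so double-check against the docs for your version):

```cpp
#include <Magick++.h>

int main(int argc, char **argv) {
    Magick::InitializeMagick(argv[0]);

    Magick::Image img("input.bmp");        // placeholder file name
    img.zoom(Magick::Geometry(320, 240));  // scale
    img.rotate(30.0);                      // rotate by 30 degrees
    img.shear(15.0, 0.0);                  // shear along x
    img.flop();                            // reflect horizontally (flip() is vertical)
    img.write("output.bmp");
    return 0;
}
```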
I need code for rotating an image in C++ that functions like MATLAB's imrotate function.
Please suggest a good link, or perhaps someone can provide code equivalent to imrotate.
Or at least please explain the algorithm.
It's not homework. I need this code for my project, and we can use any external library or code.
OpenCV 2.0 has several computer vision and image processing tools. Specifically, warpAffine (given a rotation matrix) will solve your problem of rotating an image.
The 2x3 transformation matrix mentioned in the documentation is as follows:

    [ cos(θ)  -sin(θ)  tx ]
    [ sin(θ)   cos(θ)  ty ]

where θ is the angle of rotation and tx and ty are the translations along the x and y axes respectively.
You can get the source code here.
Also, OpenCV 2.0 has many MATLAB-esque functions like imread, etc.
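For example, an imrotate-style rotation with those functions might look like the sketch below (file names are placeholders; note that this keeps the original image size, whereas MATLAB's imrotate grows the canvas by default):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("input.png");                      // placeholder file name
    if (src.empty()) return 1;

    double angle = 30.0;                                        // degrees, counter-clockwise
    cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
    cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);  // the 2x3 matrix shown above

    cv::Mat dst;
    cv::warpAffine(src, dst, rot, src.size());                  // rotate about the image center
    cv::imwrite("rotated.png", dst);
}
```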
Magick can help you. Read this PDF and search for rotate.
Check this; hope it helps.
Other questions on Stack Overflow cover the same topic, with expert opinions on it.
libgd has image rotation functions.
There is no built-in way of accomplishing this in C++, short of writing your own function for manipulating binary data, which yields other problems like "How do I decompress a jpg/png in C++?"
Your best bet is a third-party graphics library such as libSDL.