According to the Raphael documentation, there are two different entities, Paper and Raphael. What is the difference between the two? Does "Paper" actually mean "Raphael canvas", while "Raphael" refers to methods that are available in general whenever the Raphael library is loaded?
As far as I understand it, you are spot on. A Paper refers to a particular element that Raphael will render to. The Raphael methods are essentially static methods: they can be called without creating a Paper instance. You can have several Papers on any given web page, but you'll only ever have one Raphael.
A simple way to think about them may be that a Paper is a canvas, and Raphael is the library itself that has some static methods.
I have some 3D objects in a simple Qt3D scene with a camera; it took only a few minutes to set up in Qt3D using C++. What is the best way to do collision detection? I am not asking how to do collision detection in general, but what the best way is to do it in Qt3D.
I changed the question from "the best way" to "the intended way". Qt3D is an entity component system, and the question relates to the architecture of Qt3D, which is not that simple to understand in terms of where to add this. KDAB talk about an intended way; what is that way?
Added May 15:
I am using C++, not QML.
From comments and answers, it seems clear that the intended way is to add collision detection in aspects.
If this is correct, then how can you, for example, maintain a list of colliding objects in an aspect? That seems impossible to me; I hope someone has an idea.
Added May 23:
Collision detection was included in Qt3D before; many things were taken out when Qt3D was redesigned, and as a result a lot of documentation is lacking. Please do not reply to this if you do not really know the inner workings of Qt3D. I am not looking for the short answers you can google yourself, but for a deeper understanding, because KDAB's documentation of the new way things work is still lacking.
Qt3D uses an Entity Component System (ECS). Typically in such systems entities are collections of components, and there exist systems that operate over entities based on their contents. Qt3D is no different: your QML scripts (or equivalent C++ code) define entities and associate components with them, and Qt3D's predefined systems operate over and manipulate them to create the intended user interface experience. This is where most Qt3D tutorials end; some go on to mention that there are ways to build on top of Qt3D.
Qt3D's predefined systems can be complemented with new ones; in Qt3D these systems are called aspects. To extend Qt3D with a new aspect, for example physics simulation, you define an aspect that wraps the implementation of the system, plus custom components that encapsulate whatever data the aspect needs that cannot otherwise be associated with an entity. Once all of this is defined and registered with the Qt3D engine, you simply populate your entities with the components the system requires, and the aspect will be invoked to operate over and manipulate them.
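Very roughly, and glossing over the backend-node plumbing that the KDAB tutorial below covers in detail, a custom aspect skeleton might look like the following sketch. This is only an illustration assuming the Qt3D 5.x C++ API; the class names CollisionComponent and CollisionAspect are made up, and the actual collision test is left as a comment.

    // Illustrative skeleton only (Qt3D 5.x assumed); Q_OBJECT and the
    // change-propagation / backend-node plumbing are omitted for brevity.
    #include <Qt3DCore/QAbstractAspect>
    #include <Qt3DCore/QAspectEngine>
    #include <Qt3DCore/QAspectJob>
    #include <Qt3DCore/QComponent>
    #include <Qt3DCore/QNodeId>
    #include <QtCore/QPair>
    #include <QtCore/QVector>

    // Front-end component holding the data the aspect needs per entity,
    // e.g. a (hypothetical) bounding-sphere radius.
    class CollisionComponent : public Qt3DCore::QComponent
    {
    public:
        explicit CollisionComponent(Qt3DCore::QNode *parent = nullptr)
            : Qt3DCore::QComponent(parent) {}
        float radius = 1.0f;
    };

    // The aspect itself: Qt3D asks it for jobs every frame via jobsToExecute(),
    // and those jobs do the actual pair-wise overlap tests.
    class CollisionAspect : public Qt3DCore::QAbstractAspect
    {
    public:
        explicit CollisionAspect(QObject *parent = nullptr)
            : Qt3DCore::QAbstractAspect(parent) {}

        // Regarding "where do I keep a list of colliding objects?": the aspect
        // is a long-lived object, so the contact list can simply live here and
        // be rebuilt by the jobs each frame.
        QVector<QPair<Qt3DCore::QNodeId, Qt3DCore::QNodeId>> contacts;

    protected:
        QVector<Qt3DCore::QAspectJobPtr> jobsToExecute(qint64 time) override
        {
            Q_UNUSED(time);
            // Return jobs that walk the entities carrying a CollisionComponent,
            // test them pair-wise and fill 'contacts'.
            return {};
        }
    };

    // Registration, e.g. in main() before setting the root entity:
    //   Qt3DCore::QAspectEngine engine;
    //   engine.registerAspect(new CollisionAspect);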
The remaining question is then how to implement a physics (or any entity-pairing-based) system in an ECS, which has been covered ubiquitously in ECS game-design discussions. Below I've listed slides and blog posts from KDAB that provide a high-velocity crash course in the Qt3D architecture and the reasoning behind it, and a tutorial explaining the ins and outs of extending Qt3D with a custom aspect. Good luck!
Slides:
https://www.qtdeveloperdays.com/sites/default/files/qt3d-in-depth.pdf
Detail:
https://www.kdab.com/overview-qt3d-2-0-part-1/
https://www.kdab.com/overview-qt3d-2-0-part-2/
Tutorial:
https://www.kdab.com/writing-custom-qt-3d-aspect/
https://www.kdab.com/writing-custom-qt-3d-aspect-part-2/
Addendum: Unfortunately, there appears to be a missing 3rd part :(
Link for UI
My requirement is to make this chart compatible with older versions of Internet Explorer.
The link to the UI is attached.
That's presumably done in d3; it looks like this example.
There are two parts to that visualization: calculating the size and position of the circles, and actually drawing them. It's very possible that the first part is already compatible with IE, or could be made so by polyfilling any Array functions that don't exist that far back.
You could take over from there and draw the circles in Raphael, which does work at least as far back as IE7. I believe that's the idea behind d34raphael, a fork of d3. You could try that, or do it yourself.
Say there are three boxes on the screen; how can I go about touching one of them to pick it up and "throw" it at the others? I have the rest of the world implemented, but can't find much information on how to grab/drag/toss physics objects. Is there any sample code or documentation out there that would help with this?
It depends on what you are attempting to do. It is a physics simulation, and as such the typical way of interacting with the system is by applying forces to objects, as opposed to directly manipulating their x,y coordinates. But you can in fact do either. I believe the most common approach is to use a mouse joint; a Google search for b2MouseJoint will turn up the documentation and several examples, including the one linked below, and a rough sketch of the grab/drag/release flow follows the link.
http://muhammedalee.wordpress.com/tag/b2mousejoint/
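As a rough illustration only (assuming the standard Box2D C++ API; the helper names startDrag/moveDrag/endDrag and the maxForce value are made up), the grab/drag/release flow might look something like this:

    #include <Box2D/Box2D.h>

    b2MouseJoint *g_mouseJoint = nullptr;

    // Touch-down over the body you want to grab.
    void startDrag(b2World *world, b2Body *groundBody, b2Body *grabbedBody,
                   const b2Vec2 &touchPoint)
    {
        b2MouseJointDef def;
        def.bodyA = groundBody;                           // static anchor body
        def.bodyB = grabbedBody;                          // the box being grabbed
        def.target = touchPoint;                          // world-space point under the finger
        def.maxForce = 1000.0f * grabbedBody->GetMass();  // how hard the joint may pull
        g_mouseJoint = static_cast<b2MouseJoint *>(world->CreateJoint(&def));
    }

    // Touch-move: the joint pulls the body toward the new target.
    void moveDrag(const b2Vec2 &touchPoint)
    {
        if (g_mouseJoint)
            g_mouseJoint->SetTarget(touchPoint);
    }

    // Touch-up: destroying the joint releases the body, which keeps whatever
    // velocity the drag gave it; that is the "throw".
    void endDrag(b2World *world)
    {
        if (g_mouseJoint) {
            world->DestroyJoint(g_mouseJoint);
            g_mouseJoint = nullptr;
        }
    }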
For an experiment, we are looking for a way to automatically compare two hand (mouse-)drawn images. These images are, for instance, drawn on an HTML5 canvas element and we need some way to see whether the pictures roughly match.
So, if someone draws a house, we need to test whether the second drawing looks like the first house. It doesn't matter what exactly is in the image; we only need to know whether the two images look alike, i.e. whether the person drawing the picture can redraw roughly the same picture. The exact orientation of the lines, the size of the image or the position of the drawing on the canvas shouldn't matter.
Is there, by any chance, a library or project that does this?
Yes, absolutely. Both Face.com and ImageMagick provide APIs for this kind of comparison:
http://www.imagemagick.org/api/compare.php
FaceAPI - Track Faces from a Webcam: http://faceapi.com
They might be useful for your requirement.
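If you would rather do the comparison in your own code than through an external API, one possible direction (only a hedged sketch, not something suggested by the links above) is contour/shape matching with OpenCV: cv::matchShapes compares Hu moments, which are invariant to translation, scale and rotation, so the position, size and orientation of the drawing would not matter. The file names and the similarity threshold below are placeholders.

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    // Returns the largest outer contour of a binarised drawing
    // (dark strokes on a light background are assumed).
    static std::vector<cv::Point> largestContour(const cv::Mat &gray)
    {
        cv::Mat bin;
        cv::threshold(gray, bin, 128, 255, cv::THRESH_BINARY_INV);
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Point> best;
        for (const auto &c : contours)
            if (best.empty() || cv::contourArea(c) > cv::contourArea(best))
                best = c;
        return best;
    }

    int main()
    {
        cv::Mat a = cv::imread("drawing1.png", cv::IMREAD_GRAYSCALE);  // placeholder paths
        cv::Mat b = cv::imread("drawing2.png", cv::IMREAD_GRAYSCALE);
        if (a.empty() || b.empty())
            return 1;

        std::vector<cv::Point> ca = largestContour(a), cb = largestContour(b);
        if (ca.empty() || cb.empty())
            return 1;

        // Lower scores mean more similar shapes; the 0.1 cut-off is arbitrary
        // and would need tuning on real drawings.
        double score = cv::matchShapes(ca, cb, cv::CONTOURS_MATCH_I1, 0.0);
        std::cout << "shape distance: " << score << '\n'
                  << (score < 0.1 ? "roughly alike" : "different") << '\n';
        return 0;
    }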
Ultimately, we didn't find a library that does exactly this, so we modified the original project's goals and crowdsourced the comparisons that needed to be done.
You can probably also use Mechanical Turk or something similar for the time being, depending on how much you wish to pay for it.
I think researchers are currently working on technical solutions to the issue described above.
I recently saw the virtual mirror concept on YouTube. I tried it out and researched it, and it seems that the creators have used augmented reality so that people can see the output on their screens. While researching, I found that usually a pattern is identified onto which a 3D image is superimposed.
Question 1: How are they able to superimpose the jewellery and track the face of the person without identifying any pattern?
I also looked into various libraries that I could use to make a program similar to the one they show. It seems to me that a lot of people are using Android phones and iPhones and making apps that use augmented reality.
Question 2: Is there any way that I can use C++ to make a program that uses augmented reality?
Oh, and the most important thing, the link to the application is provided below:
http://www.boutiqueaccessories.com.au/virtual-mirror/w1/i1001664/
Do try it out. It's a good experience. :D
I'm not able to actually try the live demo, but the linked video suggests that they either use some simplified pattern recognition (getting the person's outline), or they simply track you based on the initial image (with your position/texture being determined by the outline being shown).
Following the video, it's easy to see that there's no real/advanced AR behind this. The images are simply overlaid or hidden (e.g. when it loses track of one ear because you're looking to the side), and they're not transformed (no perspective change or resizing happening). They definitely seem to track the head (or features like the ears, neck, etc.); depending on your background and surroundings, that's actually a rather trivial task.
Question 2: Sure! There are lots of premade toolkits out there, but you could just as well use a general image processing library such as OpenCV to do the math. Augmented reality usually uses some kind of pattern (e.g. a card or page with a known pattern) to determine the correct position and transformation for the content to be added to the image. There are also approaches that use the device's orientation and perspective changes in camera images to determine depth/position (I really like this demo).
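To make Question 2 concrete, here is a minimal sketch of the face-tracking half of such a "virtual mirror" in C++ with OpenCV. It only draws a rectangle where an overlay would go; the cascade file name is the one shipped with OpenCV, and the window title and key handling are arbitrary choices.

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::VideoCapture cam(0);                     // default webcam
        cv::CascadeClassifier face;
        if (!cam.isOpened() ||
            !face.load("haarcascade_frontalface_default.xml"))
            return 1;

        cv::Mat frame, gray;
        while (cam.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::Rect> faces;
            face.detectMultiScale(gray, faces, 1.1, 3);

            // Wherever a face is found is where the jewellery overlay would be
            // positioned relative to the face rectangle (ears, neck, ...).
            for (const cv::Rect &r : faces)
                cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);

            cv::imshow("virtual mirror", frame);
            if (cv::waitKey(30) == 27)               // Esc to quit
                break;
        }
        return 0;
    }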