Link for UI
My requirement is to make this chart compatible with older versions of Internet Explorer.
The link for the UI is attached.
That's presumably done in d3; it looks like this example.
There are two parts to that visualization: calculating the size and position of the circles, and actually drawing them. It's very possible that the first part is already compatible with IE, or could be made so by polyfilling any Array methods that don't exist that far back.
You could take over from there and draw the circles in Raphael, which works at least as far back as IE7. I believe that's the idea behind d34raphael, a fork of d3. You could try that or do it yourself.
I'm currently learning OpenCV for a project I recently started, and I need to detect 3D boxes (imagine big plastic boxes, maybe 3ft x 2ft x 2ft) in an image. I've used the inRange method to create an image that contains just the boxes I'd like to detect, but I'm not sure where to go from there. I'd like to get a 3D representation of these boxes back from OpenCV, but I can't figure out how. I've found quite a few tutorials explaining how to do this with just one object (which I have done successfully), but I don't know how I would make this work with multiple boxes in one image.
Thanks!
If you have established a method that works well with one object, you may just go with a divide-and-conquer approach: split your problem into several small ones by dividing your image with multiple boxes into several images, each with one object.
1. Apply an object detector to your image. This Tutorial on Object Detection may help you. A quick search for object detection with OpenCV also gave this.
2. Determine the bounding boxes of the objects (min/max of the x- and y-coordinates; maybe add some border margin).
3. Crop the bounding boxes to get single-object images (see the code sketch after this list).
4. Apply your already-working method to the set of single-object images.

In case of overlap, the cropped images may need some processing to isolate a "main" object. Whether step 4 works then depends on how robust your method is to occlusions.
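In case it helps, here is a minimal OpenCV C++ sketch of steps 2 and 3, assuming the inRange output is available as a binary mask; the file names and the margin value are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

int main() {
    // Placeholder file names: the original image and the binary mask
    // that inRange produced (white where a box is, black elsewhere).
    cv::Mat original = cv::imread("boxes.png");
    cv::Mat mask     = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);

    // Step 2: each external contour of the mask is one blob; its
    // bounding rect gives the min/max x- and y-coordinates.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    const int margin = 10;  // extra border around each box
    std::vector<cv::Mat> crops;
    for (const auto& contour : contours) {
        cv::Rect box = cv::boundingRect(contour);

        // Grow the box by the margin, clamped to the image borders.
        box.x = std::max(box.x - margin, 0);
        box.y = std::max(box.y - margin, 0);
        box.width  = std::min(box.width  + 2 * margin, original.cols - box.x);
        box.height = std::min(box.height + 2 * margin, original.rows - box.y);

        // Step 3: crop out a single-object image.
        crops.push_back(original(box).clone());
    }

    // Step 4: run the already-working single-object method on each crop.
    return 0;
}
```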
I stumbled upon your question when looking for object detection. It's been quite a while since you asked, but since this is a public knowledge base, a discussion of this topic might still be helpful for others.
Say there are 3 boxes on the screen; how can I go about touching one of them to pick it up and "throw" it at the others? I have the rest of the world implemented, but I can't find much information on how to grab/drag/toss physics objects. Is there any sample code or documentation out there that would help with this?
It depends on what you are attempting to do. It is a physics simulation, and as such the typical way of interacting with the system is by applying forces to objects, as opposed to directly manipulating their x,y coordinates. But you can in fact do either. I believe the most common approach is to use a mouse joint. A Google search for b2MouseJoint will turn up the documentation and several examples, including this one:
http://muhammedalee.wordpress.com/tag/b2mousejoint/
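To make that concrete, here is a hedged C++ sketch for Box2D 2.x; the function name and the maxForce value are my own choices, and it assumes your touch handling has already found the body under the finger:

```cpp
#include <Box2D/Box2D.h>

// Create a mouse joint when a touch lands on a body. Assumes you already
// have the b2World, a static ground body to anchor the joint, and the
// b2Body your hit test found under the touch.
b2MouseJoint* grabBody(b2World* world, b2Body* groundBody,
                       b2Body* touchedBody, const b2Vec2& touchPoint) {
    b2MouseJointDef def;
    def.bodyA  = groundBody;    // static anchor body required by the joint
    def.bodyB  = touchedBody;   // the body being dragged
    def.target = touchPoint;    // touch location in world coordinates
    def.maxForce = 1000.0f * touchedBody->GetMass();  // tune for snappiness
    return static_cast<b2MouseJoint*>(world->CreateJoint(&def));
}
```

While the touch moves, call joint->SetTarget(newPoint) each frame; on release, call world->DestroyJoint(joint). The body keeps whatever velocity the drag gave it, so a quick flick-and-release produces exactly the "throw" you describe.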
For an experiment, we are looking for a way to automatically compare two hand-drawn (mouse-drawn) images. These images are, for instance, drawn on an HTML5 canvas element, and we need some way to see whether the pictures roughly match.
So, if someone draws a house, we need to test whether the second drawing looks like the first house. It doesn't matter what exactly is in the image; we only need to know whether the two images look alike, i.e., whether the person drawing the picture can redraw roughly the same picture. The exact orientation of the lines, the size of the image, or the position of the picture on the canvas shouldn't matter.
Is there, by any chance, a library or project that does this?
Yes, absolutely. Both Face.com and ImageMagick provide APIs for this:
http://www.imagemagick.org/api/compare.php
FaceAPI - Track Faces from a Webcam: http://faceapi.com
These might be useful for your requirement.
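If you would rather self-host than use those APIs, one hedged alternative (my own suggestion, not from the links above) is OpenCV's Hu-moment shape matching, which is invariant to exactly the translation, scale and rotation you want to ignore; the file names below are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

// Rough similarity of two line drawings via Hu-moment shape matching.
// Returns 0.0 for identical shapes; larger values mean less similar.
double drawingDistance(const std::string& fileA, const std::string& fileB) {
    cv::Mat a = cv::imread(fileA, cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread(fileB, cv::IMREAD_GRAYSCALE);

    // Binarize, assuming dark strokes on a light canvas.
    cv::threshold(a, a, 128, 255, cv::THRESH_BINARY_INV);
    cv::threshold(b, b, 128, 255, cv::THRESH_BINARY_INV);

    // matchShapes accepts grayscale images directly and compares their
    // Hu moments, which are translation/scale/rotation invariant.
    return cv::matchShapes(a, b, cv::CONTOURS_MATCH_I1, 0.0);
}

int main() {
    std::cout << drawingDistance("house1.png", "house2.png") << "\n";
}
```

You would still need to pick the "looks alike" threshold empirically, and Hu moments only capture gross shape, not interior detail.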
Ultimately, we didn't find a library that does exactly this, so we modified the original project's goals and crowdsourced the comparisons that needed to be done.
You can probably also use Mechanical Turk or something similar for the time being, depending on how much you wish to pay for it.
I think researchers are currently working on technical solutions to the issue described above.
I am trying to 'squeeze' my textures for walking animations. The anim has 8 frames, but it can actually be done quite well with 1-2-3-4-5-4-3-2, which would fit nicely in a 128x128-point texture. Do you know of a tool that can create the plist entries for 6-7-8 that are mapped onto the 4-3-2 areas of the texture?
Coding is still an option, but I was wondering if some tool has that out of the box.
I'm surprised there are still Cocos2D developers out there who aren't using TexturePacker. :)
Check out the Alias Creation section under Features. I'm quoting (but can also confirm that this works perfectly):
If two images are identical after trimming, only one image is placed in the sprite sheet. The duplicates will just be added to the description file, allowing you to access it with both names.
This is perfect when using animations: you simply don't have to care about identical phases.
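And if you do end up coding it (which you said is still an option), a sketch along these lines builds the 1-2-3-4-5-4-3-2 cycle directly, so frames 6-8 just reuse the 4-3-2 textures. This is cocos2d-x 3.x style C++; the plist and frame names are assumptions:

```cpp
#include "cocos2d.h"
using namespace cocos2d;

// Build the palindromic walk cycle from a 5-frame sheet; the repeated
// phases simply reference the same sprite frames again.
Animate* makeWalkAnimation() {
    auto cache = SpriteFrameCache::getInstance();
    cache->addSpriteFramesWithFile("walk.plist");   // assumed sheet name

    const int order[] = {1, 2, 3, 4, 5, 4, 3, 2};
    Vector<SpriteFrame*> frames;
    for (int i : order) {
        frames.pushBack(cache->getSpriteFrameByName(
            StringUtils::format("walk_%d.png", i)));  // assumed frame names
    }
    auto animation = Animation::createWithSpriteFrames(frames, 0.1f);
    return Animate::create(animation);
}
```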
I recently saw the virtual mirror concept on YouTube; I tried it out and researched it. It seems that the creators have used augmented reality so that people can see the output on their screens. While researching, I found out that usually a pattern is identified, onto which a 3D image is superimposed.
Question 1: How are they able to superimpose the jewellery and track the person's face without identifying any pattern?
I also tried checking various libraries that I could use to make a program similar to the one they show. It seems to me that a lot of people are using Android phones and iPhones and making apps that use augmented reality.
Question 2: Is there any way that I can use C++ to make a program that uses augmented reality?
Oh, and most importantly, the link to the application is provided below:
http://www.boutiqueaccessories.com.au/virtual-mirror/w1/i1001664/
Do try it out. It's a good experience. :D
I'm not able to actually try the live demo, but the linked video suggests that they either use some simplified pattern recognition (getting the person's outline), or they simply track you based on the initial image (with your position/texture being determined by the outline being shown).
Following the video, it's easy to see that there's no real/advanced AR behind this. The images are simply overlaid or hidden (e.g. in case it loses track of one ear due to you looking to the side), and they're not transformed (no perspective change or resizing happening). They definitely seem to track the head (or features like the ears, neck, etc.); depending on your background and surroundings, that's actually a rather trivial task.
Question 2: Sure! There are lots of premade toolsets out there, but you could just as well use a general image-processing library such as OpenCV to do the math. Augmented reality usually uses some kind of pattern (e.g. a card or page with a known pattern) to determine the correct position and transformation for the content being added to the image. There are also approaches that use the device's orientation and perspective changes in camera images to determine depth/position (I really like this demo).
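To sketch what that pattern-based approach looks like in plain OpenCV C++ (a chessboard standing in for the known pattern; the function name is made up, and the intrinsics are assumed to come from a prior cv::calibrateCamera run):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// One AR step: find a known planar pattern in the frame, estimate the
// camera pose relative to it, and project a virtual 3D point into the image.
bool overlayVirtualPoint(cv::Mat& frame, const cv::Mat& cameraMatrix,
                         const cv::Mat& distCoeffs) {
    const cv::Size patternSize(9, 6);   // inner corners of the chessboard
    std::vector<cv::Point2f> corners;
    if (!cv::findChessboardCorners(frame, patternSize, corners))
        return false;                   // pattern not visible this frame

    // 3D coordinates of the pattern corners in the pattern's own plane.
    std::vector<cv::Point3f> objectPoints;
    for (int y = 0; y < patternSize.height; ++y)
        for (int x = 0; x < patternSize.width; ++x)
            objectPoints.emplace_back(float(x), float(y), 0.0f);

    // Pose (rotation + translation) of the pattern relative to the camera.
    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, corners, cameraMatrix, distCoeffs, rvec, tvec);

    // Project a virtual point hovering above the pattern and draw it.
    std::vector<cv::Point3f> virtualPoint = {cv::Point3f(4.0f, 3.0f, -3.0f)};
    std::vector<cv::Point2f> imagePoint;
    cv::projectPoints(virtualPoint, rvec, tvec, cameraMatrix, distCoeffs,
                      imagePoint);
    cv::circle(frame, imagePoint[0], 8, cv::Scalar(0, 0, 255), cv::FILLED);
    return true;
}
```

A real overlay would render a textured model using the same rvec/tvec, and a jewellery app would swap the chessboard for face/feature tracking, but the pose-then-project pipeline is the same.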