Solve the challenge of overlapping points in Matplotlib/Pyplot - python-2.7

I want to plot two classes of points in Python with different colors, but when I plot them with scatter, points that overlap others obscure them.
What I mean is that when two points (e.g. a green and a purple one) fall on the same fixed location, I want both of them to stay visible.
My code:
import matplotlib.pyplot as plt
plt.scatter(range(1, len(tp_labels) + 1), [x[1] for x in tp_labels], color='purple')
plt.scatter(range(1, len(tp_labels) + 1), [x[2] for x in tp_labels], color='green')
As you can see, the green points overwrite the purple ones wherever they overlap.
I would appreciate your solutions.

I think what you need is to tinker with the alpha setting in order to find the amount of transparency that best suits your particular circumstances. In matplotlib/pyplot the alpha setting determines the transparency of the item you are plotting; its value ranges from 0.0 (fully transparent) to 1.0 (fully opaque).
Check out the documentation here and you'll see that you can adjust the alpha setting for almost anything in matplotlib/pyplot, up to and including the legends and even the plot background. This should solve your issue of subsequent plots obliterating earlier ones.
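For example, here is a minimal sketch with made-up data (the structure of tp_labels is assumed from the question; alpha=0.5 is just a starting point to tune):
import matplotlib.pyplot as plt

tp_labels = [(i, i % 5, (i * 3) % 5) for i in range(20)]  # hypothetical data
xs = range(1, len(tp_labels) + 1)

# Semi-transparent markers: overlapping points blend instead of hiding
plt.scatter(xs, [x[1] for x in tp_labels], color='purple', alpha=0.5)
plt.scatter(xs, [x[2] for x in tp_labels], color='green', alpha=0.5)
plt.show()
Where the two series coincide, the blended color now shows both points instead of only the last one drawn.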

Related

Getting Hue value from the peaks in histogram opencv

I am trying to use color information for the detection of rectangles. Some of my rectangles are overlapping and multicolored. I found a solution to detect these rectangles using Hue values. I am checking inRange with the following Hue values for the colors:
Orange 0-22
Yellow 22-38
Green 38-75
Blue 75-130
Violet 130-160
Red 160-179
However, I do not know in advance what the exact colors are going to be. For example, in one image the rectangles can be orange, red and blue, and in another image they can be other colors.
I tried looking at the histogram, but my background is not only white or black, so the histogram is confusing.
If you give me some ideas about how to handle this problem, I will appreciate it.
You can try a brute-force approach, where you try all the color ranges and then use findContours (example) to see whether you can find a contour that is plausibly a rectangle. If the background is very noisy you can require a minimum size for the contour (contourArea). You could also check the solidity by dividing the contour area by the area of the minAreaRect; for a well-detected rectangle the result should approach 1.
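A rough sketch of that idea in Python/OpenCV (the hue ranges come from the question; the saturation/value floors, minimum area, solidity threshold and file name are assumptions to tune):
import cv2
import numpy as np

HUE_RANGES = {'orange': (0, 22), 'yellow': (22, 38), 'green': (38, 75),
              'blue': (75, 130), 'violet': (130, 160), 'red': (160, 179)}
MIN_AREA = 500        # assumed noise floor for contour size
MIN_SOLIDITY = 0.85   # assumed threshold; a clean rectangle approaches 1

img = cv2.imread('rectangles.png')  # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

for name, (lo, hi) in HUE_RANGES.items():
    mask = cv2.inRange(hsv, np.array([lo, 60, 60]), np.array([hi, 255, 255]))
    # [-2] picks the contour list in both OpenCV 3 and 4 return styles
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area < MIN_AREA:
            continue
        (_, _), (w, h), _ = cv2.minAreaRect(cnt)
        if w and h and area / (w * h) >= MIN_SOLIDITY:
            print("%s rectangle candidate, area %.0f" % (name, area))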
Whether this can work depends on several factors, and overlapping rectangles will quickly break it.
So if I understand correctly, you have a variety of images, each of which contains multiple rectangles that can be a variety of different colors, the background of the image is non-uniform, and you're trying to segment out the rectangles using a histogram?
Using histograms for image segmentation works best with grayscale images on a uniform background, so that the peaks in your histogram tell you the primary intensities of the objects you are trying to segment out. This method is not going to translate well to your application because the shapes you are attempting to segment are non-uniform in shade. Without seeing example images I would say this probably isn't going to work; however, you might be able to get away with it if the shade variation of the rectangles is relatively small. Basically, if you have rectangles in the 15-30 range you might be alright, but if they vary from 20-100 you're going to be out of luck, and the same goes for variation in the background.
If the background and the rectangles have very clearly defined borders, and the background colors transition VERY smoothly, you may be able to get away with some sort of region growing on the background to collect a list of all the background pixels, and then set those to black or something to allow better analysis of the rectangles in the foreground. But I can only speculate so much with the information you've given in your post.
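As a rough illustration of that last idea, a flood-fill based region-growing sketch in Python/OpenCV (the seed corner, tolerance and file name are all assumptions):
import cv2
import numpy as np

img = cv2.imread('scene.png')  # hypothetical input
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill requires a padded mask

# Grow the background region from the top-left corner, absorbing pixels
# within +/-10 per channel of their neighbours, and paint it black.
cv2.floodFill(img, mask, (0, 0), (0, 0, 0), (10, 10, 10), (10, 10, 10))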

Inkscape: enlarge figure without creating distortions

Figure: (a) what I have, (b) what I get, (c) what I want.
I have a simple vector graphic in Inkscape, which consists of a rectangle, filled points and stars. Since the axis ranges are not really nice for a publication (the height is approximately 3 times the width of the picture), I want to rescale the picture. However, I do not have the raw data, so I cannot simply plot it again. How can I rescale my graphic (see figure (a)) so that the x-range is wider (see figure (c)) without getting distortions (see figure (b))? In the end I want to create a PDF file out of it.
Any ideas on that?
Thanks for your help.
You can try to do it in 2 steps, using the Object -> Transform tool (Shift-Ctrl-M).
First, select everything, and with the transform tool select the Scale tab, and scale horizontally by, say, 300%. All figures will be distorted.
Now, unselect the rectangle, and scale horizontally again by 33.3%, but first click on Apply to each object separately. This will undo the distortion (but not the translation) of each object.
Note that 300% followed by 33.3% should leave the individual objects with the same size.
Documentation here.

Using OpenGL together with Qt Data Visualization

I'm trying to render a 3D bar graph using the Data Visualization library of Qt. The application I want to develop requires that bars in different value ranges be colored differently. To make things more concrete:
1) Yellow if value <=5000
2) Red if value between 5001 and 15000
3) Blue if value above 15000
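A minimal sketch of the mapping I am after, in plain Python (this expresses the desired rule only; it is not Qt API):
def bar_color(value):
    # One flat color per bar, chosen from the fixed thresholds above
    if value <= 5000:
        return 'yellow'
    if value <= 15000:
        return 'red'
    return 'blue'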
However, Qt's library does not let me color the bars with different colors. The class QBar3DSeries has three different options for the color style:
1) Q3DTheme::ColorStyleUniform : All bars are colored with the same color. This is out of the question.
2) Q3DTheme::ColorStyleObjectGradient : All bars are colored with the same gradient. Again this is out of the question.
3) Q3DTheme::ColorStyleRangeGradient : This could be a temporary solution. The bars are colored according to the ratio of an individual bar's value to the value of the highest bar. But here the bars are displayed as gradients, so more than one color is used, while I want a single color per bar. And it is based on the relation to the largest value, not on the thresholds I want to specify (in this example 5000, 15000 and 20000).
Maybe I need some other way to intervene in the process by which the graph is rendered. Can I use OpenGL to do this? (That would mean a lot of work; I don't know much about OpenGL.)
Any help is appreciated. Thanks.

Color normalization based on known objects

I was unable to find literature on this.
The question is: given a photograph with a well-known object in it, say something that was printed for this purpose, how well does it work to use that object to infer the lighting conditions, as a method of color profile calibration?
For instance, say we print out the rainbow peace flag and then photograph it under various lighting conditions with a consumer-grade flagship smartphone camera (say, an iPhone 6 or a Nexus 6). The underlying question is whether using known references within the image is a potentially good technique for calibrating the colors throughout the image.
There are of course a number of issues regarding the variance of lighting conditions across different regions of the photograph, along with which wavelengths the device is capable of differentiating even in the best circumstances, but let's set those aside.
Has anyone worked with this technique or seen literature regarding it? If so, can you point me in the direction of some findings?
Thanks.
I am not sure if this is a standard technique, but one simple way to calibrate your color channels would be to learn a regression model (per pixel) between the colors observed in the known region and their actual colors. If you have some shots of known images, you should have sufficient data to learn the transformation model with a neural network (or a simpler model like linear regression if you prefer, though a NN can capture multi-modal mappings). You can even do patch-based regression with a NN on small patches (say 8x8 or 16x16) if you need to learn spatial dependencies between intensities.
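A minimal sketch of the linear-regression variant with NumPy (all patch values below are made up; in practice you would average the pixels of each known patch):
import numpy as np

# Hypothetical mean RGB of each known patch as photographed (observed)
# and its reference value (truth), both on a 0-255 scale.
observed = np.array([[180, 60, 50], [200, 170, 40], [40, 150, 60],
                     [50, 60, 170], [220, 210, 200], [20, 20, 25]], float)
truth = np.array([[255, 0, 0], [255, 255, 0], [0, 255, 0],
                  [0, 0, 255], [255, 255, 255], [0, 0, 0]], float)

# Fit an affine map truth ~ observed @ M + b by least squares.
X = np.hstack([observed, np.ones((len(observed), 1))])
coef = np.linalg.lstsq(X, truth, rcond=None)[0]
M, b = coef[:3], coef[3]

def calibrate(img):
    # Apply the learned correction to an HxWx3 image.
    flat = img.reshape(-1, 3).astype(float)
    return (flat.dot(M) + b).clip(0, 255).reshape(img.shape)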
This should be possible, but you should pay attention to the way your known object reacts to light. Ideally it should be non-glossy, have identical colours when pictured from an angle, be totally non-transparent, and reflect the wavelengths outside the visible spectrum to which your sensor is sensitive (IR, UV; no filter is perfect) uniformly across all the differently coloured regions. That last requirement is very important and very hard to get right.
However, the main issue with a coloured known object is: what are the actual colours of its different regions in RGB(*)? You can determine the effect of different lighting conditions relative to each other this way, but never relative to some ground truth.
The solution: use a uniformly white, non-reflective, non-transparent surface; a sufficiently thick sheet of white paper should do just fine. Take a non-overexposed photograph of the sheet in your scene, and you know:
R, G and B should be close to equal
R, G and B should be nearly 255.
From those two facts and the R, G and B values you actually get from the sheet, you can determine any shift in colour and brightness in your scene. Assume that black is still black (usually a reasonable assumption) and use linear interpolation to determine the shift experienced by pixels coloured somewhere between 0 and 255 on any of the axes.
(*) or other colourspace of your choice.
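A minimal sketch of that correction with NumPy (the sampled sheet values are made up):
import numpy as np

# Hypothetical mean R, G, B measured on the white sheet in the scene.
white = np.array([231.0, 224.0, 203.0])

def correct(img):
    # Stretch each channel linearly so the sheet maps back to 255,
    # assuming black stays black (the interpolation described above).
    return (img.astype(float) * (255.0 / white)).clip(0, 255)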

Getting and comparing object's color from image

My aim is to determine the color of an object and classify it; for example, some blue, slightly dark blue or light blue can all be classified as one type: Blue. I have many template images of objects. What I want is to group these images manually; for example, some objects have blue-colored text, but some areas are yellow, etc. At first I group them manually, and then each group should be analyzed by the computer to do some feature extraction. Then, when a randomly selected object comes in from the camera as video or an image, I want to identify its group correctly. How can I do it? Which features should be extracted, and how can they be compared? I was thinking of a histogram of the Hue plane in HSV, but I don't know which features to take from that histogram and how to compare them with another (from the template images).
EDIT 1: Example of images that should be classified; I will post more later if necessary.
It is always good to use the LAB color space in order to mimic human perception.
http://en.wikipedia.org/wiki/Lab_color_space
That is because the Euclidean metric in this color space represents the perceptual distance between colors, that is, how close they are.
You should cluster on A and B and ignore the L value, which is the lightness.
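A small sketch of that comparison in Python/OpenCV (the file names are placeholders, and averaging over the whole image is a simplification; in practice you would mask out the object first):
import cv2
import numpy as np

def mean_ab(path):
    # Mean A and B channels of an image, ignoring L (lightness).
    lab = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2LAB).astype(float)
    return lab[..., 1:].reshape(-1, 2).mean(axis=0)

# Euclidean distance in A/B approximates perceptual color distance.
dist = np.linalg.norm(mean_ab('template.png') - mean_ab('query.png'))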
HSV can be tricky to use in varying light situations. This is especially true outside, where shadows are a lot bluer than sunlight areas.
Ideally, you can use the hue and saturation components and ignore the value component. This would make the distance between a light blue and a dark blue very small:
dist = sqrt((h1 - h2)^2 + (s1 - s2)^2)
The gotcha is that hue is actually a circular scale (like an angle): the difference between 255 and 0 should be only 1.
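A small sketch of a wrap-aware distance (H_MAX = 256 matches the 0-255 scale used above; OpenCV's 8-bit hue runs 0-179, so adjust accordingly):
import math

H_MAX = 256  # full-scale hue range assumed here

def hue_dist(h1, h2):
    # Circular difference: 255 and 0 are 1 apart, not 255.
    d = abs(h1 - h2) % H_MAX
    return min(d, H_MAX - d)

def hs_dist(h1, s1, h2, s2):
    # Distance on hue and saturation, ignoring value (brightness).
    return math.sqrt(hue_dist(h1, h2) ** 2 + (s1 - s2) ** 2)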