Plotting/finding perimeter of data on a scatter plot - list

Regarding the included graph (it has been ListLinePlotted to show the data sets more clearly):
1: How would I find and/or plot the perimeter of each data set, ideally in list form so that it scales when I plot it using ListLogPlot alongside the original data? (Similar to FindCurvePath, but for a non-round shape.)
2: How would I fill the entire area encompassed by each data set on the ListPlot, i.e. so that the resulting graph has four block-color areas in the shape of each region?
Essentially I'm just trying to plot graphs which clearly show the different regions. If there are better ways then I'd be open to suggestions!
P.S. The regions will never intersect for this particular plot.
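As a language-agnostic sketch of the idea being asked about (the question itself targets Mathematica's ListLinePlot/ListLogPlot), the snippet below traces each cluster's boundary with a convex hull and fills the enclosed region on log axes. The sample clusters are hypothetical, and for genuinely non-convex regions an alpha-shape/concave-hull routine would be needed in place of ConvexHull.

```python
# Hedged sketch: boundary ("perimeter") and filled area of each point cluster,
# using a convex hull as a stand-in for a general boundary tracer.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
# Four hypothetical, non-intersecting clusters at different scales.
clusters = [rng.uniform(low, 3 * low, size=(50, 2)) for low in (1, 10, 100, 1000)]

fig, ax = plt.subplots()
for pts in clusters:
    hull = ConvexHull(pts)
    loop = pts[np.append(hull.vertices, hull.vertices[0])]  # close the boundary
    ax.plot(pts[:, 0], pts[:, 1], ".", alpha=0.4)            # original data
    ax.plot(loop[:, 0], loop[:, 1], "-")                     # question 1: perimeter
    ax.fill(loop[:, 0], loop[:, 1], alpha=0.2)               # question 2: filled region
ax.set_xscale("log")
ax.set_yscale("log")
plt.show()
```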

Related

How to overlap two labels to make a mask for segmentation in ITK-SNAP?

I'm annotating CT scan slices (NIfTI format) with ITK-SNAP. One slice contains two labels (Subarachnoid and Intraparenchymal) in the same area. Here is the original annotated image link: https://ibb.co/FJpyVZF
Since the two labels overlap, the intersection area in the slice should contain both labels, but it only contains the label that was drawn last. Since the Subarachnoid area was drawn last, over the Intraparenchymal area, the final segmented image shows only Subarachnoid in the intersection region. I'm attaching the annotated slice https://ibb.co/F3TrXtq and the segmented slice https://ibb.co/sRgdndY to illustrate my point.
What can I do to make the intersection area contain both labels?
ITK-SNAP uses binary label maps. That approach does not allow label overlap. Your options are:
Use a different label map for each structure you are segmenting.
Use a different segmentation representation. This will require use of different software. I recommend 3D Slicer.
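As a small illustration of the first option (one label map per structure), here is a hedged Python sketch; the file names are hypothetical and it assumes nibabel is available. With each structure stored as its own binary mask, a voxel can belong to both, and the overlap is just the voxelwise AND of the two masks.

```python
# Sketch: one binary label map per structure instead of a single multi-label map.
# File names are hypothetical; assumes the two structures were exported separately.
import nibabel as nib

sub = nib.load("subarachnoid_seg.nii.gz").get_fdata() > 0        # binary mask 1
intra = nib.load("intraparenchymal_seg.nii.gz").get_fdata() > 0  # binary mask 2

both = sub & intra  # voxels carrying both labels (the overlapping region)
print("overlapping voxels:", int(both.sum()))
```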

Using shapefile polygons in Spotfire to select points

I have a map in Spotfire with several different polygons overlaying it. These polygons are all stored in a shapefile and loaded onto the map as a feature layer. Also on the map are several XY points with data associated with them. I would like to be able to select a polygon and, in turn, have that polygon select all the XY points inside it.
Thanks,
-Andrew Pruet
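This is ultimately a Spotfire configuration question, but as an illustration of the underlying operation (a point-in-polygon test against the shapefile), here is a hedged sketch in Python using geopandas; the file names, column names, and chosen polygon are assumptions.

```python
# Sketch of the spatial selection outside Spotfire: keep the XY points that
# fall inside a chosen polygon from the shapefile. Inputs are hypothetical.
import geopandas as gpd
import pandas as pd

polygons = gpd.read_file("regions.shp")            # the feature-layer polygons
df = pd.read_csv("points.csv")                     # assumed columns: x, y, value
points = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df.x, df.y), crs=polygons.crs
)

selected_polygon = polygons.iloc[0].geometry       # stand-in for a user selection
inside = points[points.within(selected_polygon)]   # points inside that polygon
print(inside)
```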

How to compare two edge images in OpenCV (not matchShapes)

A little introduction to what I'm doing...
For academic purposes I am creating an application in C++ using OpenCV for the detection of static objects in a scene.
The application is based on a combined approach of background subtraction and tracking, and the detection of events related to the abandonment of objects works fine.
But at the moment I have a problem that I can't solve: I have to implement a finite state machine to detect the event of object removal, both before and after the object's entry into the background.
To do this I was ordered by my superiors to use the edges of objects.
And now the problem.
After detecting a vehicle illegally parked along a road, I need to compare the edges of various images (the background captured at the time of the alarm, the current background, the current frame) to understand what the vehicle does (starts moving again, remains parked, or starts moving again after having been absorbed into the background).
I run these comparisons on the region of the scene that contains the vehicle (vehicles typically differ in size), and I extract the edges using the Canny algorithm, obtaining a binarized CV_8UC1 cv::Mat.
At this point I have to compare them.
I tried detecting the contours with findContours and comparing them with matchShapes, but that does not seem to be the right way: I would have to compare each contour of the first image with every contour of the second, and in addition the two images to compare typically have a different number of contours (for example the original background and the current background, because the edges in the current background increase when the vehicle enters the background).
I also tried creating a new image in which each pixel is the absolute difference of the corresponding pixels of the other two, then counting the white pixels of the difference image (wPx) and using this number for the comparison as follows: I set two thresholds (thr1 and thr2) and take the perimeter of the vehicle's bounding rect (perim); if wPx < thr1*perim the images are considered equal, and if wPx > thr2*perim the images are considered different.
(I use percentage thresholds and multiply them by the perimeter of the bounding box to adapt the thresholds to the vehicle's dimensions.)
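For reference, a minimal sketch of that difference-and-threshold comparison (written in Python for brevity rather than the question's C++; the ROI tuple and the threshold values are assumptions):

```python
# Sketch: compare two binarized edge images of the vehicle's ROI by counting
# the white pixels of their absolute difference, scaled by the ROI perimeter.
import cv2

def compare_edges(edges_a, edges_b, roi, thr1=0.05, thr2=0.15):
    """Return 'equal', 'different' or 'uncertain' for two CV_8UC1 edge images."""
    x, y, w, h = roi                         # bounding rect of the vehicle
    diff = cv2.absdiff(edges_a, edges_b)     # per-pixel absolute difference
    w_px = cv2.countNonZero(diff)            # white pixels of the difference image
    perim = 2 * (w + h)                      # perimeter of the bounding rect
    if w_px < thr1 * perim:
        return "equal"
    if w_px > thr2 * perim:
        return "different"
    return "uncertain"
```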
This solution, however, does not seem very robust.
Do you have something simple to suggest?
Thank you very much in any case; more than once you StackOverflow users have helped me!
PS: THIS is an example of the images that I have to compare
The first is the background without the stationary vehicle; it contains the edges of the street;
the second is the original background, the one captured when the stationary vehicle is detected;
the third is the current background (which in this case is equal to the original, since it is the same frame, but it will change later);
the fourth is the current frame of the video.
You may want to take a look at this paper: A Novel SIFT-Like-Based Approach
for FIR-VS Images Registration. Aguilera et al. propose an Edge Oriented Histogram descriptor (EOH-SIFT).
This paper aims to register multispectral images (visible and infrared) to each other. Because of the different characteristics of the images, the authors first extract edges/contours in both images, which results in images similar to yours.
So, you can describe your image patches using this descriptor, illustrated in the following figure (taken from the above paper):
Subdivide your image patch into 4x4 zones
For each of the 16 subregions, compose a histogram of contour orientations (5 bins)
Put the histograms together into one descriptor vector of size 16x5=80 bins
Normalize the feature vector
So, every image you want to compare (in your case 4) is described by its 80-dimensional feature vector. You can compare them to each other by calculating and evaluating the Euclidean distance between them.
Note: Here a patch of size 80x80 or 100x100 (NxN) pixels is suggested. You may have to adjust the sizes to your image sizes.
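Below is a minimal sketch of such an EOH-style descriptor (not the authors' reference implementation); the fixed 100x100 patch size, the Sobel-based orientation estimate, and the exact binning are assumptions on my part:

```python
# Sketch: 4x4 grid of 5-bin orientation histograms over an edge image,
# concatenated and normalized into an 80-dimensional descriptor.
import cv2
import numpy as np

def eoh_descriptor(edges, grid=4, bins=5):
    edges = cv2.resize(edges, (100, 100))              # fixed patch size (see note above)
    gx = cv2.Sobel(edges, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(edges, cv2.CV_32F, 0, 1)
    angles = np.arctan2(gy, gx) % np.pi                # edge orientation in [0, pi)
    h, w = edges.shape[0] // grid, edges.shape[1] // grid
    hists = []
    for i in range(grid):
        for j in range(grid):
            cell = (slice(i * h, (i + 1) * h), slice(j * w, (j + 1) * w))
            on = edges[cell] > 0                       # only edge pixels contribute
            hist, _ = np.histogram(angles[cell][on], bins=bins, range=(0, np.pi))
            hists.append(hist)
    desc = np.concatenate(hists).astype(np.float32)    # 16 * 5 = 80 bins
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

# Compare two edge images by the Euclidean distance of their descriptors:
# distance = np.linalg.norm(eoh_descriptor(edges1) - eoh_descriptor(edges2))
```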

Finding all the regions in a webpage's image

I am working on a project where I need to find the different regions present in an image (of any web page), such as the navigation bar, menu bar, body, advertisement section, etc. First I want to segment the entire image into distinct regions/sections using image processing.
What I have done:
1st approach: I ran an edge-detection algorithm (Canny); this way I could see the different regions in the form of rectangular boxes. However, I couldn't find a way to recognize all these regions.
2nd approach: I used the Hough transform to get all the horizontal and vertical lines, which can help in identifying the different rectangular sections in the image. However, I have not been able to come up with a concrete approach to use these Hough lines to find all the rectangular regions embedded in the image.
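For what it's worth, here is a rough Python/OpenCV sketch of one way to turn the Canny output into candidate rectangular regions by keeping contours whose polygonal approximation has four corners; the input file name and the area threshold are hypothetical, and this only illustrates the approach described above rather than solving it:

```python
# Sketch: find roughly rectangular regions in a web-page screenshot from its edges.
import cv2

img = cv2.imread("screenshot.png")                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4 and cv2.contourArea(approx) > 1000:   # crude "rectangle" test
        x, y, w, h = cv2.boundingRect(approx)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("regions.png", img)                    # regions drawn as green boxes
```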
Any kind of your help is highly appreciated!

How can I get Google Charts to display multiple colors in a scatter chart?

I would like to display multiple colors (and potentially shapes and sizes) of data points in a Google Chart scatter chart. Does anyone have an example of how to do so?
I answered my own question after waiting SECONDS for an answer here :-)
You can indeed have different colors for different data elements. For example:
http://chart.apis.google.com/chart?chs=300x200&cht=s&chd=t:1,2,3|6,5,4&chds=1,3,0,10&chxt=x,y&chxl=0:|0|1|2|1:|0|10&chm=d,ff0000,0,0,8,0|a,ff8080,0,1,42,0|c,ffff00,0,2,16,0
It's the chm= that does the magic. I was trying to have multiple chm= statements. You need to have just one, but with multiple descriptions separated by vertical bars.
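As an illustration, the same URL can be assembled programmatically; this Python sketch (standard library only) just rebuilds the query string above, joining all the marker descriptions into a single chm= parameter:

```python
# Sketch: rebuild the scatter-chart URL with one chm= parameter whose marker
# descriptions are separated by vertical bars.
from urllib.parse import urlencode

markers = [
    "d,ff0000,0,0,8,0",   # diamond marker on point 0
    "a,ff8080,0,1,42,0",  # arrow marker on point 1
    "c,ffff00,0,2,16,0",  # cross marker on point 2
]
params = {
    "chs": "300x200",
    "cht": "s",                    # scatter chart
    "chd": "t:1,2,3|6,5,4",
    "chds": "1,3,0,10",
    "chxt": "x,y",
    "chxl": "0:|0|1|2|1:|0|10",
    "chm": "|".join(markers),      # one chm=, multiple descriptions
}
print("http://chart.apis.google.com/chart?" + urlencode(params, safe=":|,"))
```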
You can only use one dataset in a scatter plot, thus only one color.
http://code.google.com/apis/chart/#scatter_plot
From the API description:
Scatter plots use multiple data sets differently than other chart types. You can only show one data set in a scatter plot.
You could effectively fake a multi-color scatter plot by using a line plot with white lines and colored shape markers at the points you want to display.
Here's another example: twitter charts. I'm hoping to do the same thing. Need to find out how to do the concentric circles.