Extract region from a Curvilinear satellite Dataset - python-2.7

I have satellite swath data from MODIS and need to extract a subset (region) of the data to analyze (NOT plot). I am trying to find the best way to do this without loops, which can be slow. In the past I have used set.intersect, but that does not work on 2D data.
My issue is that both lat and lon are 2D, and I need to find the indices where my conditions are met, i.e. (lat>=x1)&(lat<=x2) and similarly for lon, and then use those 2D indices to slice my main dataset (Aerosol Optical Depth).
Code so Far
Normally (for 1D lat/lon) I would use Opt_Depth_Land[:,goodlat,goodlon] to extract my data, but that does not work for this type of dataset.
Any help would be greatly appreciated.

import numpy as np

valid_lat = (lat >= user_lat - radius) & (lat <= user_lat + radius)
valid_lon = (lon >= user_lon - radius) & (lon <= user_lon + radius)
Valid_Coord = np.where(valid_lat & valid_lon)  # comparing to True is redundant
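A sketch of how the combined boolean mask can then slice the AOD array. The array shapes and values here are made up for illustration; only the variable names (lat, lon, Opt_Depth_Land, user_lat, user_lon, radius) come from the question:

```python
import numpy as np

# Hypothetical 2D curvilinear swath: lat/lon vary along both array axes.
lat = np.linspace(10, 20, 12).reshape(3, 4)
lon = np.linspace(-60, -50, 12).reshape(3, 4)
# Aerosol Optical Depth with a leading band axis, as in the question.
Opt_Depth_Land = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)

user_lat, user_lon, radius = 15.0, -55.0, 3.0

valid = ((lat >= user_lat - radius) & (lat <= user_lat + radius) &
         (lon >= user_lon - radius) & (lon <= user_lon + radius))

# Option 1: a masked array keeps the 2D swath geometry intact.
aod_region = np.ma.masked_array(
    Opt_Depth_Land,
    mask=np.broadcast_to(~valid, Opt_Depth_Land.shape))

# Option 2: fancy indexing returns a flat list of in-region values per band.
rows, cols = np.where(valid)
aod_values = Opt_Depth_Land[:, rows, cols]   # shape (n_bands, n_valid)
```

Option 1 is usually preferable for further analysis because statistics like `aod_region.mean()` automatically ignore the masked (out-of-region) pixels.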

Related

How to visualize vtk data set with only point data

I have a vtk dataset with only point data and no cell data, and I would like to visualize it as a point cloud. When I read the dataset as vtkPolyData, nothing shows up on the screen. One workaround I used was to first write the dataset to .xyz files, then read those files back in and visualize them. Another approach I came up with is inserting the points as vertices manually.
Are there any neater ways to achieve this goal?
Thank you very much!
You may check out the vtkVertexGlyphFilter class; it adds a vertex cell for each input point, so plain point data becomes renderable:
https://vtk.org/doc/nightly/html/classvtkVertexGlyphFilter.html

Why 100x100 images form a 10'000-dimension space?

While reading a paper, I came across the claim that, when viewed as vectors of pixel values, face images are extremely high-dimensional. For example, 100x100 images form a 10'000-dimension space.
How is that possible, I don't seem to understand it.
A vector has only one dimension, so if you convert a 2D array into 1D (known as Flatten in neural-network terms), the result is a vector of 100*100 = 10000 values. Each pixel becomes one coordinate of that vector, so every 100x100 image is a single point in a 10,000-dimensional space.
If you need more on this topic, you can look up the concept of Flatten on YouTube; a visual walkthrough will help you get a pictorial understanding of it.
Hope this helps clear your doubt.
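A minimal sketch of the flattening described above, using numpy (the array values are arbitrary):

```python
import numpy as np

# A hypothetical 100x100 grayscale "face image".
image = np.arange(10000, dtype=float).reshape(100, 100)

# Flattening unrolls the 2D grid of pixels into one long vector:
vector = image.flatten()          # equivalently: image.reshape(-1)
print(vector.shape)               # (10000,)
```

Each such vector is one point in a 10,000-dimensional space, which is why dimensionality-reduction techniques are so common in face recognition.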

Reading the Number of Slices from the DICOM Header of a 2D Multi-slice MRI

I am working on reading a 2D multi-slice MRI and looking for the number of slices it has.
Unfortunately, there is no slice count in the DICOM header. I would like to ask why, and how I can get the slice count other than by just reading the DICOM header directly. Can I calculate the slice count from any physical value of the slices?
I have the Siemens private tag (0029,1020).
Thanks in advance.
Are you dealing with the newer multi-frame Enhanced MR Image Storage (1.2.840.10008.5.1.4.1.1.4.1) or the older single-frame MR Image Storage (1.2.840.10008.5.1.4.1.1.4)? With a multi-frame DICOM file, you can look up the Number of Frames (0028,0008) tag.
Since you are dealing with MR Image Storage instances, you can simply order all the instances according to their IPP (Image Position Patient) and IOP (Image Orientation Patient) attributes. There is a well-known algorithm that computes the distance of each instance along the plane normal and orders the instances accordingly. It has proven to be very reliable. See for example: gdcm::IPPSorter
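The core of that sorting algorithm can be sketched in a few lines. This is not gdcm::IPPSorter itself, just an illustration of the idea; the IPP/IOP values below are hypothetical:

```python
import numpy as np

def slice_positions(ipps, iop):
    """Distance of each slice along the normal to the image plane.

    ipps: list of Image Position (Patient) triples, one per instance.
    iop:  the six Image Orientation (Patient) values (row and column cosines).
    """
    row, col = np.array(iop[:3], float), np.array(iop[3:], float)
    normal = np.cross(row, col)            # plane normal
    return [float(np.dot(normal, np.array(p, float))) for p in ipps]

# Hypothetical axial series: identity orientation, slices 2.5 mm apart.
iop = (1, 0, 0, 0, 1, 0)
ipps = [(0, 0, 5.0), (0, 0, 0.0), (0, 0, 2.5)]

pos = slice_positions(ipps, iop)
order = sorted(range(len(ipps)), key=lambda i: pos[i])
print(order)       # instances ordered along the normal
print(len(ipps))   # the slice count is simply the number of sorted instances
```

Sorting by the projection onto the normal (rather than by, say, Instance Number) is what makes the approach robust across scanners and protocols.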

how to query the database to return all zip codes within a given distance (e.g. 5 miles) of a given zip code using geopy

Hi friends, I am using geopy to calculate latitude and longitude. Now I want to get the list of areas within a given distance of a zip code. How can I do that?
Well, as far as I can see, geopy doesn't have any built-in capability to get a list of areas around some coordinates.
But you can use a workaround. Take your geocode and calculate its coordinates (latitude and longitude). Then imagine a grid on the map with a cell size matching the smallest area you need to find around your location.
Use geopy to get the area name belonging to each cell corner of your grid. Is that OK for you? It is only an approximation, because a grid is not a circle and you may miss some small areas, but in most cases the solution should work fine.
It is much easier to locate zip codes inside a rectangle than inside a circle, so I would recommend approximating your problem by looking for zip codes inside a given rectangle.
Here are answers to the question of how to get a list of zip codes in a given polygon: Find zipcodes inside polygon shape using google maps api
Summary
You need the geometry for each zip code. Once you have that, you need to be able to query it using a database that supports geoqueries. One such database is Google's Fusion Tables, and there is already a geometry data table for zip codes available here: https://www.google.com/fusiontables/DataSource?docid=1AxB511DayCdtmyBuYOIPHIe_WM9iG87Q3jKh6EQ#rows:id=1
Here's the sample query for Fusion Table data.
Another approach is server-side code using PHP and CSV data. Here's a live demo: http://daim.snm.ku.dk/demo/zip/. The page also has a download for the code.
If you use any of the above techniques, please make sure to upvote the answers of the original authors :).
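The rectangle-then-circle idea above can be sketched in plain Python. Everything here is illustrative: the mini zip-code table is made up, and in practice the rows would come from a real geometry database:

```python
from math import radians, cos, sin, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in miles.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))      # Earth radius ~3958.8 miles

def zips_within(zipdb, lat0, lon0, radius_miles):
    # Cheap rectangle prefilter: ~69 miles per degree of latitude.
    dlat = radius_miles / 69.0
    dlon = radius_miles / (69.0 * max(cos(radians(lat0)), 1e-6))
    box = [(z, la, lo) for z, la, lo in zipdb
           if abs(la - lat0) <= dlat and abs(lo - lon0) <= dlon]
    # Exact circle test only on the survivors of the box filter.
    return [z for z, la, lo in box
            if haversine_miles(lat0, lon0, la, lo) <= radius_miles]

# Hypothetical mini-database of (zipcode, lat, lon) rows.
zipdb = [("10001", 40.7506, -73.9972),
         ("10451", 40.8201, -73.9251),
         ("11201", 40.6936, -73.9900),
         ("07302", 40.7178, -74.0431)]
result = zips_within(zipdb, 40.7506, -73.9972, 5)
print(result)
```

In a real system the rectangle test maps directly onto an indexed SQL `BETWEEN` query on lat/lon columns, so the expensive distance computation only runs on a handful of candidates.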

Analyzing gaze tracking data

I have an image which was shown to groups of people with different domain knowledge of its content. I then recorded gaze fixation data of them watching the image.
I now want to compare the results of the two groups, so what I need to know is whether there is a correlation between the positions of the sampling data of the two groups or not.
I have the original image as well as the fixation coords. Do you have any good idea how to start analyzing the data?
It's more about the idea or the plan, so you don't have to be too technical.
Thanks
Simple idea: render all the coordinates on the original image in a heat-map-like way, one image for each group. You can then visually compare the images for correlation, and you have some nice graphics for your paper.
There is something like the two-dimensional correlation coefficient. With software like R or Matlab you can do the number crunching for the correlation.
Matlab has a function for this:

Two-Dimensional Correlation Function: corr2
Computes the two-dimensional correlation coefficient between two matrices; the matrices must be of the same size. r = corr2(A,B)
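If you end up working in Python instead of Matlab, the same coefficient is a few lines of numpy. This is a sketch of the standard formula, not numpy library code (numpy has no built-in corr2):

```python
import numpy as np

def corr2(a, b):
    # 2D correlation coefficient between two same-size matrices,
    # analogous to MATLAB's corr2.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

a = np.arange(16, dtype=float).reshape(4, 4)
r_same = corr2(a, a)     # identical matrices -> 1.0
r_anti = corr2(a, -a)    # perfectly anti-correlated -> -1.0
print(r_same, r_anti)
```

Applied to your data, `a` and `b` would be the two groups' fixation heat maps rendered at the same resolution.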
In gaze tracking, the most interesting data lies in two areas:
1. Where all people look. For that you can use the heat map Daan suggests. Make a heat map for all people, and heat maps for the separate groups of people.
2. When people look there. For that I would recommend you start by making heat maps as above, but for short time intervals starting from the time the picture was first shown. Again, for all people, and for the separate groups you have.
The resulting set of heat maps, perhaps animated for the ones from the second point, should give you some pointers for further analysis.
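Building such a heat map from raw fixation coordinates is mostly a 2D histogram. A minimal sketch with numpy, where the image size (640x480), the bin counts, and the synthetic fixation data are all assumptions for illustration:

```python
import numpy as np

# Hypothetical fixation coordinates (x, y) in image pixels for one group.
np.random.seed(0)
fix_x = np.random.normal(320, 40, 500)   # cluster near the image centre
fix_y = np.random.normal(240, 30, 500)

# Bin fixations into a coarse grid; each cell counts how often it was fixated.
heat, xedges, yedges = np.histogram2d(fix_x, fix_y,
                                      bins=(32, 24),
                                      range=[[0, 640], [0, 480]])

# Normalise so the hottest cell is 1.0; the result can then be
# alpha-blended over the original image as a heat-map overlay.
heat = heat / heat.max()
print(heat.shape)
```

For the time-resolved variant in point 2, you would simply restrict `fix_x`/`fix_y` to fixations whose timestamps fall in each interval and build one histogram per interval.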