Interactive pcolor in python - python-2.7

I have two numpy arrays of shape (Number_of_time_steps, N1, N2). Each one holds the velocities on an N1 x N2 plane for each of the Number_of_time_steps time steps, which is 12,000 in my case. The two arrays come from two fluid dynamics simulations in which a point is slightly perturbed at time 0, and I want to study the discrepancies that the perturbation causes in the velocity of each point on the grid. To do so, for each time step I make a plot with 4 subplots: a pcolor map of plane 1, a pcolor map of plane 2, the difference between the planes, and the difference between the planes on a log scale. I use matplotlib.pyplot.pcolor to create each subplot.
This is easy to do, but the problem is that I would end up with 12,000 such plots (saved as .png files on disk). Instead, I want some kind of interactive plot in which I can enter the time step and it updates the 4 subplots to that time step from the values in the two existing arrays.
If somebody has any idea how to solve this, I'd be happy to hear about it.

For interactive graphics, you should look into Bokeh:
http://docs.bokeh.org/en/latest/docs/quickstart.html
You can create a slider that will bring up the time slices you want to see.
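As a rough sketch (not a complete app), a Bokeh server script driven by a slider could look like the following; run it with bokeh serve script.py. The names array1, N1 and N2 are placeholders for the data described in the question.
# Minimal sketch of a Bokeh server app with a time-step slider; assumes
# array1 has shape (n_steps, N1, N2) and is already loaded in this script.
from bokeh.io import curdoc
from bokeh.layouts import column
from bokeh.models import Slider
from bokeh.plotting import figure

p = figure(title="plane 1")
renderer = p.image(image=[array1[0]], x=0, y=0, dw=N2, dh=N1, palette="Spectral11")

slider = Slider(start=0, end=array1.shape[0] - 1, value=0, step=1, title="time step")

def update(attr, old, new):
    # swap in the slice for the selected time step
    renderer.data_source.data = {"image": [array1[slider.value]]}

slider.on_change("value", update)
curdoc().add_root(column(slider, p))
The same pattern extends to four image figures (one per subplot of the original plan), all updated from the same slider callback.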

If you can run from within IPython, you could just make a function that plots a given timestep:
%matplotlib  # set the backend
import matplotlib.pyplot as plt

fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='col', sharey='row')

def make_plots(timestep):
    # Clear the subplots
    for ax in (ax1, ax2, ax3, ax4):
        ax.cla()
    # Make your plots. Add whatever options you need
    # (e.g. pass norm=matplotlib.colors.LogNorm() for the log-scale difference panel)
    ax1.pcolor(array1[timestep])
    ax2.pcolor(array2[timestep])
    ax3.pcolor(array1[timestep] - array2[timestep])
    ax4.pcolor(array1[timestep] - array2[timestep])
    # Make axis labels, etc.
    ax1.set_xlabel(...)  # etc.
    # Update the figure
    fig.show()

# Plot some timesteps like this
make_plots(0)      # time 0
# wait some time, then plot another
make_plots(100)    # time 100
make_plots(11999)  # the last of the 12,000 timesteps (indices start at 0)
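If you would rather not call make_plots() by hand for every time step, a rough alternative sketch uses a matplotlib Slider widget on top of the figure created above (the valstep argument needs a reasonably recent matplotlib):
from matplotlib.widgets import Slider

fig.subplots_adjust(bottom=0.15)                    # leave room for the slider
slider_ax = fig.add_axes([0.15, 0.03, 0.7, 0.03])   # [left, bottom, width, height]
t_slider = Slider(slider_ax, 'time step', 0, array1.shape[0] - 1, valinit=0, valstep=1)

def on_slide(val):
    # redraw the four panels for the selected time step
    make_plots(int(val))
    fig.canvas.draw_idle()

t_slider.on_changed(on_slide)
plt.show()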

Related

Creating a colorbar next to my matplotlib plot w/o using the colormap to plot the data

I'm having trouble creating a colorbar for my plot in Python using matplotlib. I am using a colormap, not to colour all the data that I plot, but to extract a colour for each plot based on a value I'm not plotting. I hope this makes sense.
So I'm in a for loop, creating a plot each time with a colour based on a certain parameter, like this (the data is just an example to create an MWE; my real data is more complicated):
import matplotlib as mpl
from matplotlib import pyplot as plt
import numpy as np

xdata = np.array(range(10))
parameter = [0.5, 0.3, 0.78, 0.21, 0.45]  # random parameter example
cmap = mpl.cm.get_cmap('jet')
for i in range(len(parameter)):
    clr = cmap(parameter[i])  # index the list; cmap() maps a value in [0, 1] to a colour
    plt.plot(xdata, xdata**i, c=clr)
plt.show()
Now, what I would want is a colorbar on the side (or actually two, but that's another problem I think) that shows the jet colormap and the corresponding values. The values need to be scaled to a new min and max value.
So far I've found the following, but I don't understand it enough to apply it to my own problem:
Getting individual colors from a color map in matplotlib
Which told me how to extract a colour and how to create a normalized colormap.
Colorbar only
Which should tell me how to add a colorbar without using the plotted data, but I don't understand enough of it. My problem is with the creation of the axes: I don't understand that part when I want to put the colorbar next to my plot. In the example they create a figure with handle fig, but in my case the figure is created when I do plt.imshow(image), since that is what I start with before plotting over the image, so I cannot use fig.add_axes here.
I hope you can help me out here. It would also be great if I could create a 'reversed' colorbar, so that either the colours run in the opposite direction or the values next to the bar do.
At any point in the script you can get the figure via fig = plt.gcf() and an axes via ax=plt.gca(). So, adding an axes may be done by plt.gcf().add_axes(...).
There is also nothing wrong with putting fig=plt.figure() before plotting anything.
Note that after creating a new axes, plt.gca() will return the new axes, so it is a good idea to create a reference to the main axes before adding a new one.
A convenient way to obtain a figure and an axes for later referencing is to create the figure via
fig, ax = plt.subplots()
Colormaps:
Every standard colormap has a reversed version, which has _r at the end of its name, e.g. you can use viridis_r instead of viridis.
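Putting those pieces together, a rough sketch for the loop in the question could look like this; it feeds the colorbar from a ScalarMappable built on the parameter range rather than from the plotted data (the data is just the example from the question):
import matplotlib as mpl
from matplotlib import pyplot as plt
import numpy as np

xdata = np.array(range(10))
parameter = [0.5, 0.3, 0.78, 0.21, 0.45]

fig, ax = plt.subplots()
cmap = mpl.cm.get_cmap('jet_r')   # the _r variant gives the reversed colormap
norm = mpl.colors.Normalize(vmin=min(parameter), vmax=max(parameter))

for i, p in enumerate(parameter):
    ax.plot(xdata, xdata**i, c=cmap(norm(p)))

# a mappable that is not tied to any plotted artist, used only to drive the colorbar
sm = mpl.cm.ScalarMappable(cmap=cmap, norm=norm)
sm.set_array([])
fig.colorbar(sm, ax=ax, label='parameter')
plt.show()
Normalize handles the rescaling to a new min and max, and swapping the colormap name (or reversing the Normalize limits) gives the "reversed" colorbar.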

How to make image comparison in openCV more coarse

I am writing code in Python on a Raspberry Pi to compare two images using mean squared error. The project is a personal home-security application.
My main goal is to detect a change between the images that I capture from the Pi camera (if something is added to or removed from the scene), but right now my code is too sensitive: it is affected by changes in background lighting, which I do not want.
I have two options in front of me: either scrap my current logic and start over, or improve it to account for this noise (if I can call it that). I am searching for ways to improve my logic, but I wanted some guidance on how to go about it.
My biggest fear is that I'm flogging a dead horse: should I keep going, look for some other algorithm to detect a change between images, or use edge detection?
import numpy as np
import cv2
import os
from threading import Thread

###### Function definitions ########################################
def mse(imageA, imageB):
    # the 'Mean Squared Error' between the two images is the
    # sum of the squared differences between the two images;
    # NOTE: the two images must have the same dimensions
    err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
    err /= float(imageA.shape[0] * imageA.shape[1])
    # return the MSE; the lower the error, the more "similar"
    # the two images are
    return err

def compare_images(imageA, imageB):
    # compute the mean squared error
    m = mse(imageA, imageB)
    print(m)

def capture_image():
    ## shell command to take a photo
    os.system(image_args)

## path of the original (reference) image
original_image_path = "/home/pi/Downloads/python-compare-two-images/originalimage.png"
## original_image_args is a shell command to take the reference photo
original_image_args = "raspistill -o " + original_image_path + " -w 320 -h 240 -q 50 -t 500"
os.system(original_image_args)
## read the greyscale version of the image into original_image
original_image = cv2.imread(original_image_path, 0)

## test images
image_args = "raspistill -o /home/pi/Downloads/python-compare-two-images/Test_Images/image.png -w 320 -h 240 -q 50 --nopreview -t 10 --exposure sports"
image_path = "/home/pi/Downloads/python-compare-two-images/Test_Images/"
image1_name = "image.png"

# create a new thread that takes the pictures
My_Thread = Thread(target=capture_image)
# start the thread
My_Thread.start()
flag = 0
while True:
    if My_Thread.isAlive():
        flag = 0
    else:
        flag = 1
    if flag == 1:
        flag = 0
        image1 = cv2.imread(image_path + image1_name, 0)
        My_Thread = Thread(target=capture_image)
        My_Thread.start()
        compare_images(original_image, image1)
A first improvement is to apply a gain that compensates for the global variation in lighting, e.g. take the average intensity of the two images and correct one of them by the ratio of those averages.
This can fail when the foreground changes, since that also influences the global average. If the changed foreground area is not too large, you can still get a good estimate by robustly fitting a linear model y = a·x between the two images.
A worse, but unfortunately common, scenario is when the background illumination changes in a non-uniform way. A partial solution is to fit a non-uniform gain model, such as one obtained by bilinear interpolation between gains estimated at the corners, or over a finer subdivision of the image.
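A minimal sketch of the simple and robust gain ideas (the function name and clipping range are only illustrative); it assumes two greyscale frames of the same size as numpy arrays:
import numpy as np

def compensate_gain(reference, test, robust=False):
    ref = reference.astype("float")
    tst = test.astype("float")
    if robust:
        # robust estimate of the gain a in y = a.x: median of the pixelwise ratios
        gain = np.median(ref / np.maximum(tst, 1.0))
    else:
        # simple estimate: ratio of the average intensities
        gain = ref.mean() / max(tst.mean(), 1e-6)
    # scale the test frame so its lighting roughly matches the reference
    return np.clip(tst * gain, 0, 255)
The corrected frame can then be passed to mse() from the code above instead of the raw capture.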
Change detection is a well-studied field. One of the basic approaches is to model each pixel as a Gaussian distribution: sample a lot of images and compute the mean and variance of each pixel.
For pixels that tend to change when the lighting changes, the variance will be larger than for pixels that don't change as much.
To detect movement at a given pixel, you just choose what probability you consider an unusual change in the pixel value, and use the Gaussian distribution you calculated to find the corresponding threshold value.
To make this solution efficient on your Raspberry Pi, first do an "offline" computation of the per-pixel threshold values above which a change counts as movement and store them in a file; then, in the "online" stage, simply compare each pixel to its precomputed value.
For the "offline" stage I recommend using images recorded over an entire day, so that you capture all the variation you need per pixel. This stage can of course be done on your computer, with only the output file uploaded to the Raspberry Pi.

Matplotlib FuncAnimation does not open/show plot

I'm trying to learn how to animate plots using matplotlib's built-in FuncAnimation class. For this example, I just want to generate a 2D scatter plot of randomly distributed normal values and add a point to the plot each time it updates (i.e. animate the points appearing). The example code is below:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import random

plt.ion()
fig, ax = plt.subplots()
scat = ax.scatter([], [])
scat.axes.axis([-5, 5, -5, 5])

def update(point):
    array = scat.get_offsets()
    array = np.append(array, point)
    scat.set_offsets(array)
    return scat

def data_gen():
    for _ in range(0, 100):
        point = np.random.normal(0, 1, 2)
        yield point

ani = animation.FuncAnimation(fig, update, data_gen, interval=25, blit=False)
plt.show()
When I run this code, nothing happens. The terminal churns for a few seconds, and then nothing happens.
I'm using this as my guide: http://matplotlib.org/examples/animation/animate_decay.html. If I use a line plot instead of a scatter plot (essentially just replacing how the points are generated in that example), it "works" in the sense that it generates data and updates the plot, but it is not the display I want: I want to see points appearing on a scatter plot. For a scatter plot I cannot use set_data, since that is not a valid method for scatter plots, so I'm using the np.append() approach I've seen in this example:
Dynamically updating plot in matplotlib
So my question is, what am I doing wrong in this code that is causing the animation to not show up?
EDIT: I've just tried/found out that if I add:
mywriter = animation.FFMpegWriter(fps=60)
ani.save('myanimation.mp4', writer=mywriter)
It does produce an mp4 that contains the animation; I just can't get it to display dynamically as the code is running. So please focus on that problem if you are able to diagnose it. Thanks.
For future reference, @ImportanceOfBeingErnest pointed out that plt.ion() is not necessary and is specific to plotting in IPython. Removing that line fixes the problem.

Figure Size & Position after Matplotlib Zoom

I have a matplotlib image plot within a wxPython panel that I zoom in on using the native matplotlib toolbar zoom.
Having zoomed in, I wish to know the size of the resulting image so that I can calculate the magnification.
Moreover, I wish to know the position/dimensions of my zoomed-in image in relation to the original image so that I can re-plot it again at a later time.
I don't know how to approach this. I have looked over the documentation for canvas and figure but haven't found anything that would help me pinpoint the data I require. Thanks for any help.
You may want to read the following from the matplotlib doc:
Event handling and picking
Transformations tutorial
However, the transformations tutorial in particular may take a while to wrap your head around. The transformation system is very efficient and complete, but it may take you a while to figure out exactly what it is you need.
In your case, though, the following code snippet may be sufficient:
from matplotlib import pyplot as plt
import numpy

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(numpy.random.rand(10))

def ondraw(event):
    # print 'ondraw', event
    # these ax limits can be stored and reused as-is for set_xlim/set_ylim later
    print ax.get_xlim(), ax.get_ylim()

cid = fig.canvas.mpl_connect('draw_event', ondraw)
plt.show()
In the draw event you can get your axes limits, calculate a scaling and whatnot and can use it as-is later on to set the ax to the desired zoom level.
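For example, a small, purely illustrative extension of the snippet above that keeps the latest limits and turns them into a magnification factor (the helper names are not from any API):
from matplotlib import pyplot as plt
import numpy

fig, ax = plt.subplots()
ax.plot(numpy.random.rand(10))

orig_xlim, orig_ylim = ax.get_xlim(), ax.get_ylim()   # view before any zoom
saved = {'xlim': orig_xlim, 'ylim': orig_ylim}

def ondraw(event):
    # remember the most recent view limits after each toolbar zoom/pan
    saved['xlim'] = ax.get_xlim()
    saved['ylim'] = ax.get_ylim()

def magnification(full, zoomed):
    # ratio of the full axis range to the zoomed range along one axis
    return (full[1] - full[0]) / float(zoomed[1] - zoomed[0])

fig.canvas.mpl_connect('draw_event', ondraw)
plt.show()

# later, e.g.:
# print(magnification(orig_xlim, saved['xlim']))
# ax.set_xlim(saved['xlim']); ax.set_ylim(saved['ylim'])   # restore the zoomed view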

What is the idiom for setting the properties in a simple matplotlib figure?

I'm confused by the relationship among matplotlib figures, axes, and subplots.
Usually, I figure out such things by looking at and experimenting with code, which typically embodies the structural relationships among entities in an object model that can be inferred from examples of what works. But in matplotlib I often find a bewildering array of ways to accomplish the same thing, which obscures the underlying structure.
For example, if I want to make a simple (no subfigures) log-log figure, any of the following seem to have exactly the same effect.
import matplotlib.pyplot as plt
# All of the following seem to have the same effect:
plt.axes().loglog()
plt.gca().loglog()
plt.loglog()
plt.gcf().gca().loglog()
# These don't work though:
# plt.gcf().axes().loglog()
# plt.gcf().loglog()
I've tried the documentation and the tutorials, but I'm no wiser having done so.
What does each of the working examples above do? How do they differ? Why do the non-working examples fail? If I'm writing code that I expect others (or me) to be able to read, is one of these idioms preferred over another?
Note that my interest here is in programmatically creating images for publication or export rather than in the interactive creation of figures or in mimicking MATLAB's functionality. I gather that some of the "shortcuts" above have to do with making this latter scenario work.
My standard is to get fig, ax from plt.subplots like this:
fig, ax = plt.subplots(1)
ax.loglog(a, b)
I do it this way because then you can also get multiple ax objects as a list, e.g.:
# Make a column of three figures
fig, axes = plt.subplots(3)
# ("as" is a reserved word in Python, so use other names for the lists of arrays)
for ax, a, b in zip(axes, a_list, b_list):
    ax.loglog(a, b)
Or if you make a 2-by-5 grid, you get a 2D array of ax objects, so I usually flatten it with axes.flat:
# Make a 2x5 grid of figures
nrows = 2
ncols = 5
height = nrows * 4
width = ncols * 4
# Don't ask me why figsize is (width, height) instead of (height, width)....
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(width, height))
for ax, a, b in zip(axes.flat, a_list, b_list):
    ax.loglog(a, b)
I do it this way because I then have the ax objects to tweak the appearance with afterwards. I generally don't use plt.gca() except in internal plotting functions.
plt.gcf() gets the current figure, and the pyplot calls gca(), axes(), and loglog() create the underlying axes if none exists yet. The gcf()-first variants fail because a Figure object has no loglog() method, and fig.axes is a list attribute rather than a callable. So my advice is to stick to ax objects.
EDIT: removed itertools.chain stuff, swapped to axes.flat
A figure is basically a window or a file. If you make several separate figures, the idea is usually to pop up several windows or save several files.
An axes and a subplot are in some sense the same thing. For example, the figure method add_subplot returns an axes object. Each axes object represents a specific set of axes that you want to plot something on. Each axes can have several individual data sets plotted on it, but they will all use the same x and y axis.
Whether a plot is a log-log plot is determined by the function that you use to actually plot the data. For example, if you have two arrays a and b that you want to plot against each other on a log-log scale, you would use:
fig = plt.figure()                # Make a figure
loglog_ax = fig.add_subplot(111)  # Make a single axes, which is the *only* subplot
loglog_ax.loglog(a, b)            # Plot the data on a log-log plot