logarithmic contrast enhancement of an image in python - python-2.7

I want to calculate the logarithmic contrast of an image.
This is my code in Python:
import cv2
import math
import numpy as np
img=cv2.imread("lena.jpg")
width,height=img.shape[:2]
NewImg=np.zeros_like(img)
InputMax=np.amax(img)
InputMin=np.amin(img)
a=(255.0/(InputMax-InputMin))
b=255-(a*InputMax)
for i in range(width):
    for j in range(height):
        x=img[i,j]
        y=np.array(map(math.log10,x))
        NewImg[i,j]=(a*y)+b
print NewImg

Remove the float() call: change a=(255/float(InputMax-InputMin)) to a=(255.0/(InputMax-InputMin)), and on the third-to-last line change it to y=np.array(map(math.log10,x)).
Here:
import cv2
import math
import numpy as np
img=cv2.imread("xyz.jpg")
width,height=img.shape[:2]
NewImg=np.zeros_like(img)
InputMax=np.amax(img)
InputMin=np.amin(img)
a=(255.0/(InputMax-InputMin))
b=255-(a*InputMax)
for i in range(width):
    for j in range(height):
        x=img[j,i]
        y=np.array(map(math.log10,x))
        NewImg[j,i]=(a*y)+b
print NewImg
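As a side note, the per-pixel loops can be replaced with a vectorized NumPy version. This is only a minimal sketch of the same stretch; the +1.0 offset and the final clipping are my additions to avoid log10(0) on black pixels and to keep values inside the 0..255 range:
import cv2
import numpy as np
img = cv2.imread("lena.jpg")
InputMax = np.amax(img)
InputMin = np.amin(img)
a = 255.0 / (InputMax - InputMin)
b = 255 - (a * InputMax)
# np.log10 applies element-wise, replacing the nested loops above.
y = np.log10(img.astype(np.float64) + 1.0)
NewImg = np.clip(a * y + b, 0, 255).astype(np.uint8)
print NewImg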

Related

Keras error: Error when checking target: expected dense_2 to have shape (2,) but got array with shape (1,)

I have tried to write a small example with Keras, but I get this error: Error when checking target: expected dense_2 to have shape (2,) but got array with shape (1,).
I have tried to change the input_shape but it doesn't work.
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
import numpy
print "hello"
input=[[1],[2],[3],[4],[5],[6],[7],[8]]
input=numpy.array(input, dtype="float")
# input=input.reshape(8,1)
output=[[1],[0],[1],[0],[1],[0],[1],[0]]
output=numpy.array(output, dtype="float")
(trainx,testx,trainy,testy)=train_test_split(input, output, test_size=0.25, random_state=42)
lb = LabelBinarizer()
trainy=lb.fit_transform(trainy)
testy=lb.transform(testy)
model=Sequential()
model.add(Dense(4,input_shape=(1,),activation="sigmoid"))
# model.add(Dense(4,activation="sigmoid"))
# print len(lb.classes_)
model.add(Dense(len(lb.classes_),activation="softmax",input_shape=(4,)))
INIT_LR = 0.01
EPOCHS = 20
print("[INFO] training network...")
opt = SGD(lr=INIT_LR)
model.compile(loss="categorical_crossentropy", optimizer=opt,metrics=["accuracy"])
H = model.fit(trainx, trainy, validation_data=(testx, testy),epochs=EPOCHS, batch_size=2)
Since you have two classes, you can have a single neuron in the final Dense layer and use sigmoid activation. Or, if you want to use softmax, you need to create a one-hot encoding of y like this:
(trainx,testx,trainy,testy)=train_test_split(input, output, test_size=0.25, random_state=42)
trainy = keras.utils.to_categorical(trainy, 2)
testy = keras.utils.to_categorical(testy, 2)
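For the first option (a single sigmoid output), a minimal sketch could look like the following; it reuses the toy data from the question and switches the loss to binary_crossentropy, which matches a single 0/1 output:
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from sklearn.model_selection import train_test_split
import numpy
input = numpy.array([[1],[2],[3],[4],[5],[6],[7],[8]], dtype="float")
output = numpy.array([[1],[0],[1],[0],[1],[0],[1],[0]], dtype="float")
(trainx, testx, trainy, testy) = train_test_split(input, output, test_size=0.25, random_state=42)
model = Sequential()
model.add(Dense(4, input_shape=(1,), activation="sigmoid"))
# One sigmoid output neuron replaces the 2-unit softmax layer,
# so the 0/1 labels can be used directly (no one-hot encoding needed).
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer=SGD(lr=0.01), metrics=["accuracy"])
H = model.fit(trainx, trainy, validation_data=(testx, testy), epochs=20, batch_size=2)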
You should use "from tensorflow.python.keras.xx" instead of "from keras.xx". This prevents errors such as: AttributeError: module 'tensorflow' has no attribute 'get_default_graph'.

Disparity map tutorial SADWindowSize and image can not float

I am new to Python and have been following a basic tutorial (https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.html#py-depthmap) for creating a disparity map from two images, but I have had several errors.
I am using Python 2.7, OpenCV 3.3.0, matplotlib 1.3, numpy 1.10.2
This is my code v1:
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread("C:\Python27\tsukuba_l.png,0")
imgR = cv2.imread("C:\Python27\tsukuba_r.png,0")
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL,imgR)
plt.imshow(disparity,'gray')
plt.show()
I corrected the StereoBM function from the tutorial to match the latest OpenCV version (cv2.createStereoBM to cv2.StereoBM_create) and got an error on the second-to-last line (disparity=...): (-211) SADWindowSize must be odd, be within 5..255 and not be larger than image width or height in function. I tried reducing the block size but there was still an error; I have checked that the image paths are correct and that both images are the same size.
I then have attempted to use StereoSGBM_create instead, v2 code:
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread("C:\Python27\tsukuba_l.png,0")
imgR = cv2.imread("C:\Python27\tsukuba_r.png,0")
stereo = cv2.StereoSGBM_create(minDisparity=5, numDisparities=16, blockSize=5)
disparity = stereo.compute(imgL,imgR)
plt.imshow(disparity,'gray')
plt.show()
However this returns:
TypeError: Image data can not convert to float.
Any idea why these errors may be occurring?
Typo error: change
imgL = cv2.imread("C:\Python27\tsukuba_l.png,0")
imgR = cv2.imread("C:\Python27\tsukuba_r.png,0")
to
imgL = cv2.imread("C:\Python27\tsukuba_l.png",0)
imgR = cv2.imread("C:\Python27\tsukuba_r.png",0)
or better:
imgL = cv2.imread("C:\Python27\tsukuba_l.png",cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread("C:\Python27\tsukuba_r.png",cv2.IMREAD_GRAYSCALE)
From the cv2.imread documentation: Warning: Even if the image path is wrong, it won't throw any error; imread simply returns None.
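Because of that silent failure, it can help to verify that the images actually loaded before computing the disparity. A minimal sketch (the raw-string prefix is my addition, so that backslashes in the Windows path stay literal):
import cv2
# Raw strings keep "\t" in the Windows path from being read as a tab character.
imgL = cv2.imread(r"C:\Python27\tsukuba_l.png", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread(r"C:\Python27\tsukuba_r.png", cv2.IMREAD_GRAYSCALE)
# imread returns None instead of raising when a file cannot be read; that None
# is what eventually leads to errors such as "TypeError: Image data can not convert to float".
if imgL is None or imgR is None:
    raise IOError("Could not read one of the input images")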

3D python import error

I installed Netbeans and Python IDE 2.7.1 as instructed in the standard installation guide. I would like to run the following code:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = Axes3D(fig)
for c, z in zip(['r', 'g', 'b', 'y'], [30, 20, 10, 0]):
    xs = np.arange(20)
    ys = np.random.rand(20)
    ax.bar(xs, ys, zs=z, zdir='y', color=c, alpha=0.8)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
After running the code, I am getting the following error message:
ImportError: No module named mpl_toolkits.mplot3d
Also, for almost all programs I have tested, I get the same import error message.
Could someone assist?
With this description it seems that you have an installation problem at hand. Questions:
Can you import matplotlib at all (e.g., run import matplotlib and see if that throws an error)?
If you can import matplotlib, what does matplotlib.__version__ say?
Which OS and which Python (import sys; print sys.version)?
First guess is that you do not have matplotlib installed in a way your interpreter would find it.
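A quick way to gather those answers in one place might be a small diagnostic snippet like this:
import sys
print sys.version  # which Python interpreter/version is running
try:
    import matplotlib
    print matplotlib.__version__  # confirms matplotlib is importable and shows its version
    import mpl_toolkits.mplot3d   # the 3D toolkit that ships with matplotlib
    print "mpl_toolkits.mplot3d imported fine"
except ImportError as e:
    print "Import failed:", e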

3D CartoPy similar to Matplotlib-Basemap

I'm new to Python and have a question about whether Cartopy can be used in a 3D plot. Below is an example using matplotlib Basemap.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.basemap import Basemap
m = Basemap(projection='merc',
            llcrnrlat=52.0, urcrnrlat=58.0,
            llcrnrlon=19.0, urcrnrlon=40.0,
            rsphere=6371200., resolution='h', area_thresh=10)
fig = plt.figure()
ax = Axes3D(fig)
ax.add_collection3d(m.drawcoastlines(linewidth=0.25))
ax.add_collection3d(m.drawcountries(linewidth=0.35))
ax.add_collection3d(m.drawrivers(color='blue'))
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Height')
fig.show()
This creates a map within a 3D axis so that you can plot objects over the surface. But Cartopy returns a matplotlib.axes.GeoAxesSubplot, and it is not clear how to take this and add it to a 3D figure/axis as above with matplotlib Basemap.
So, can someone give any pointers on how to do a similar 3D plot with Cartopy?
The basemap mpl3d is a pretty neat hack, but it hasn't been designed to function in the described way. As a result, you can't currently use the same technique for much other than simple coastlines. For example, filled continents just don't work AFAICT.
That said, a similar hack is available when using cartopy. Since we can access shapefile information generically, this solution should work for any poly-line shapefile such as coastlines.
The first step is to get hold of the shapefile, and the respective geometries:
feature = cartopy.feature.NaturalEarthFeature('physical', 'coastline', '110m')
geoms = feature.geometries()
Next, we can convert these to the desired projection:
target_projection = ccrs.PlateCarree()
geoms = [target_projection.project_geometry(geom, feature.crs)
         for geom in geoms]
Since these are shapely geometries, we then want to convert them to matplotlib paths with:
from cartopy.mpl.patch import geos_to_path
import itertools
paths = list(itertools.chain.from_iterable(geos_to_path(geom)
                                           for geom in geoms))
With paths, we should be able to just create a PathCollection in matplotlib and add it to the axes, but sadly Axes3D doesn't seem to cope with PathCollection instances, so we need to work around this by constructing a LineCollection (as basemap does). Unfortunately, LineCollections don't take paths, but segments, which we can compute with:
segments = []
for path in paths:
    vertices = [vertex for vertex, _ in path.iter_segments()]
    vertices = np.asarray(vertices)
    segments.append(vertices)
Pulling this all together, we end up with a similar result to the basemap plot which your code produces:
import itertools
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
import numpy as np
import cartopy.feature
from cartopy.mpl.patch import geos_to_path
import cartopy.crs as ccrs
fig = plt.figure()
ax = Axes3D(fig, xlim=[-180, 180], ylim=[-90, 90])
ax.set_zlim(bottom=0)
target_projection = ccrs.PlateCarree()
feature = cartopy.feature.NaturalEarthFeature('physical', 'coastline', '110m')
geoms = feature.geometries()
geoms = [target_projection.project_geometry(geom, feature.crs)
         for geom in geoms]
paths = list(itertools.chain.from_iterable(geos_to_path(geom) for geom in geoms))
# At this point, we start working around mpl3d's slightly broken interfaces.
# So we produce a LineCollection rather than a PathCollection.
segments = []
for path in paths:
    vertices = [vertex for vertex, _ in path.iter_segments()]
    vertices = np.asarray(vertices)
    segments.append(vertices)
lc = LineCollection(segments, color='black')
ax.add_collection3d(lc)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Height')
plt.show()
On top of this, mpl3d seems to handle PolyCollection well, which would be the route I would investigate for filled geometries, such as the land outline (as opposed to the coastline, which is strictly an outline).
The important step is to convert the paths to polygons, and use these in a PolyCollection object:
concat = lambda iterable: list(itertools.chain.from_iterable(iterable))
polys = concat(path.to_polygons() for path in paths)
lc = PolyCollection(polys, edgecolor='black',
                    facecolor='green', closed=False)
The complete code for this case would look something like:
import itertools
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection, PolyCollection
import numpy as np
import cartopy.feature
from cartopy.mpl.patch import geos_to_path
import cartopy.crs as ccrs
fig = plt.figure()
ax = Axes3D(fig, xlim=[-180, 180], ylim=[-90, 90])
ax.set_zlim(bottom=0)
concat = lambda iterable: list(itertools.chain.from_iterable(iterable))
target_projection = ccrs.PlateCarree()
feature = cartopy.feature.NaturalEarthFeature('physical', 'land', '110m')
geoms = feature.geometries()
geoms = [target_projection.project_geometry(geom, feature.crs)
         for geom in geoms]
paths = concat(geos_to_path(geom) for geom in geoms)
polys = concat(path.to_polygons() for path in paths)
lc = PolyCollection(polys, edgecolor='black',
                    facecolor='green', closed=False)
ax.add_collection3d(lc)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Height')
plt.show()
The result is a 3D plot with the land polygons filled in green over the flat axes.

Backward integration in time using scipy odeint

Is it possible to integrate an ordinary differential equation backward in time using scipy.integrate.odeint?
If it is possible, could someone tell me what the 'time' argument to odeint should be?
odeint handles negative values of the t argument. No special treatment is needed.
Here's an example:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def mysys(z, t):
    """A slightly damped oscillator."""
    return [z[1] - 0.02*z[0], -z[0]]

if __name__ == "__main__":
    # Note that t starts at 0 and goes "backwards"
    t = np.linspace(0, -50, 501)
    z0 = [1, 1]
    sol = odeint(mysys, z0, t)
    plt.plot(t, sol)
    plt.xlabel('t')
    plt.show()
The resulting plot shows both solution components over t from 0 down to -50.
You can make a change of variables s = t_0 - t, and integrate the differential equation with respect to s. odeint doesn't do this for you.
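For completeness, here is a minimal sketch of that substitution, using the same damped oscillator as the example above and taking t_0 = 0 so that s = -t; the function names are mine:
import numpy as np
from scipy.integrate import odeint

def mysys(z, t):
    """The same slightly damped oscillator as above."""
    return [z[1] - 0.02*z[0], -z[0]]

def mysys_reversed(z, s):
    # With s = -t we have dz/ds = -dz/dt, so negate the right-hand side
    # and evaluate it at t = -s.
    return [-rhs for rhs in mysys(z, -s)]

s = np.linspace(0, 50, 501)  # integrate forward in s
z0 = [1, 1]
sol = odeint(mysys_reversed, z0, s)
# sol[k] approximates the solution of the original system at t = -s[k].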
It is not necessary to make a change of variables. Here is an example:
import math
import numpy
import scipy
import pylab as p
from math import *
from numpy import *
from scipy.integrate import odeint
from scipy.interpolate import splrep
from scipy.interpolate import splev
g1=0.01
g2=0.01
w1=1
w2=1
b1=1.0/20.0
b2=1.0/20.0
b=1.0/20.0
c0=0
c1=0.2
wf=1
def wtime(t):
    f=1+c0+c1*cos(2*wf*t)
    return f

def dv(y,t):
    return array([y[1], -(wtime(t)+g1*w1+g2*w2)*y[0]+w1*y[2]+w2*y[3], w1*y[2]-g1*w1*y[0], w2*y[3]-g2*w2*y[0]])
tv=linspace(100,0,1000)
v1zero=array([1,0,0,0])
v2zero=array([0,1,0,0])
v1s=odeint(dv,v1zero,tv)
v2s=odeint(dv,v2zero,tv)
p.plot(tv,v1s[:,0])
p.show()
I checked the result with Wolfram Mathematica (which can solve ODEs backward in time).