I am trying to perform a simple contrast stretch with Python scikit-image on an image opened with GDAL as a float32 array. I first calculate the percentiles with:
p2, p98 = np.percentile(arrayF, (P1, P2))
and then try to perform the stretch with:
img_rescale = exposure.rescale_intensity(arrayF, in_range=(p2, p98))
The returned image, written to a .tiff with GDAL, contains only ones and nodata.
The cause of the problem might be the data range: for this arrayF it is between 0.0352989 and 1.03559. The script works fine when stretching an array with values 0-255.
Here is the function:
import os
import numpy as np
from osgeo import gdal
from osgeo.gdalconst import GA_ReadOnly
from skimage import exposure

def contrastStrecher(Raster1, p1, p2, OutDir, OutName1):
    fileNameR1 = Raster1
    P1 = p1
    P2 = p2
    outputPath = OutDir
    outputName = OutName1
    extension = os.path.splitext(fileNameR1)[1]
    raster1 = gdal.Open(fileNameR1, GA_ReadOnly)
    colsR1 = raster1.RasterXSize
    rowsR1 = raster1.RasterYSize
    bandsR1 = raster1.RasterCount
    driverR1 = raster1.GetDriver().ShortName
    geotransformR1 = raster1.GetGeoTransform()
    proj1 = raster1.GetProjection()
    bandF = raster1.GetRasterBand(1)
    nodataF = bandF.GetNoDataValue()
    newnodata = -1.
    arrayF = bandF.ReadAsArray().astype("float32")
    nodatamaskF = arrayF == nodataF
    arrayF[nodatamaskF] = newnodata
    # Note: the percentiles are computed on the full array, nodata pixels included
    p2, p98 = np.percentile(arrayF, (P1, P2))
    img_rescale = exposure.rescale_intensity(arrayF, in_range=(p2, p98))
    del arrayF
    img_rescale[nodatamaskF] = newnodata
    driver = gdal.GetDriverByName(driverR1)
    outraster = driver.Create(outputPath + outputName + extension, colsR1, rowsR1, 1, gdal.GDT_Float32)
    outraster.SetGeoTransform(geotransformR1)
    outraster.SetProjection(proj1)
    outband = outraster.GetRasterBand(1)
    outband.WriteArray(img_rescale)
    del img_rescale
    outband.FlushCache()
    outband.SetNoDataValue(newnodata)
    del outraster, outband
I figured out that the value of newnodata interferes with the calculation. Previously I assigned a value of -9999.9 to it and got the results described above. Now, with -1., the function seems to output correct results, but I'm not entirely sure of that, as the nodata or newnodata value should not be included in the calculation.
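For reference, one way to keep nodata out of the statistics entirely is to compute the percentiles from the valid pixels only (a minimal sketch reusing the variables from the function above):

# Sketch: derive the stretch limits from valid pixels only, then restore the
# nodata value after rescaling (variable names as in the function above).
valid = arrayF[~nodatamaskF]                    # drop nodata pixels first
p2, p98 = np.percentile(valid, (P1, P2))        # percentiles of valid data only
img_rescale = exposure.rescale_intensity(arrayF, in_range=(p2, p98))
img_rescale[nodatamaskF] = newnodata            # re-apply nodata afterwards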
There is a concept called percentile stretching, where everything past the designated bottom and top percentiles is mapped to 0 and 255 respectively, and all pixel values in between are stretched to improve contrast. I am not sure that is what you want, but I believe that is what's done in this example with downloadable code: https://scikit-image.org/docs/0.9.x/auto_examples/plot_equalize.html
But for some images you may not want any mapping to 0-255, so maybe use argparse to pass these values as parameters, or use np.amax and np.amin to designate the cut-off points. Try writing the images to file and build an algorithm that suits your needs.
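If an explicit 0-255 output is what you want, rescale_intensity also takes an out_range argument; here is a sketch that uses the array min/max as cut-offs instead of percentiles (reusing arrayF from the question):

# Sketch: stretch between the array minimum and maximum and write out 8-bit data.
lo, hi = np.amin(arrayF), np.amax(arrayF)
img_8bit = exposure.rescale_intensity(arrayF, in_range=(lo, hi),
                                      out_range=(0, 255)).astype("uint8")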
I have a sample of images and would like to detect a specific object in the image/video, knowing its real physical dimensions in advance. I have one image sample (an airplane door) and would like to find the window in the door, knowing its physical dimensions (say an inner radius of 20 cm and an outer radius of 23 cm) and its real-world position in the door (for example, its minimal distance to the door frame is 15 cm). I also know my camera resolution in advance. Is there any MATLAB or OpenCV C++ code that can do that automatically with image processing?
Here is my image sample
And a more complex image with round logos.
I ran the code on the second, more complex image and did not get the same results. Here is the image result.
You are looking for a circle in the image, so I suggest you use the Hough circle transform.
Convert the image to grayscale.
Find edges in the image.
Use the Hough circle transform to find circles in the image.
For each candidate circle, sample the values along the circle and accept it if they correspond to predefined values.
The code:
clear all
% Parameters
minValueWindow = 90;
maxValueWindow = 110;
% Read file
I = imread('image1.jpg');
Igray = rgb2gray(I);
[row,col] = size(Igray);
% Edge detection
Iedge = edge(Igray,'canny',[0 0.3]);
% Hough circle transform
rad = 40:80; % The approximate radius in pixels
detectedCircle = {};
detectedCircleIndex = 1;
for radIndex=1:1:length(rad)
    [y0detect,x0detect,Accumulator] = houghcircle(Iedge,rad(1,radIndex),rad(1,radIndex)*pi/2);
    if ~isempty(y0detect)
        circles = struct;
        circles.X = x0detect;
        circles.Y = y0detect;
        circles.Rad = rad(1,radIndex);
        detectedCircle{detectedCircleIndex} = circles;
        detectedCircleIndex = detectedCircleIndex + 1;
    end
end
% For each detection run a color filter
ang=0:0.01:2*pi;
finalCircles = {};
finalCircleIndex = 1;
for i=1:1:detectedCircleIndex-1
    rad = detectedCircle{i}.Rad;
    xp = rad*cos(ang);
    yp = rad*sin(ang);
    for detectedPointIndex=1:1:length(detectedCircle{i}.X)
        % Take each detected center and sample the gray image
        samplePointsX = round(detectedCircle{i}.X(detectedPointIndex) + xp);
        samplePointsY = round(detectedCircle{i}.Y(detectedPointIndex) + yp);
        sampleValueInd = sub2ind([row,col],samplePointsY,samplePointsX);
        sampleValueMean = mean(Igray(sampleValueInd));
        % Check if the circle color is good
        if(sampleValueMean > minValueWindow && sampleValueMean < maxValueWindow)
            circle = struct();
            circle.X = detectedCircle{i}.X(detectedPointIndex);
            circle.Y = detectedCircle{i}.Y(detectedPointIndex);
            circle.Rad = rad;
            finalCircles{finalCircleIndex} = circle;
            finalCircleIndex = finalCircleIndex + 1;
        end
    end
end
% Find the main circle by merging close hypotheses together
for finaCircleInd=1:1:length(finalCircles)
    circleCenter(finaCircleInd,1) = finalCircles{finaCircleInd}.X;
    circleCenter(finaCircleInd,2) = finalCircles{finaCircleInd}.Y;
    circleCenter(finaCircleInd,3) = finalCircles{finaCircleInd}.Rad;
end
[ind,C] = kmeans(circleCenter,2);
c = [length(find(ind==1));length(find(ind==2))];
[~,maxInd] = max(c);
xCircle = median(circleCenter(ind==maxInd,1));
yCircle = median(circleCenter(ind==maxInd,2));
radCircle = median(circleCenter(ind==maxInd,3));
% Plot circle
imshow(Igray);
hold on
ang=0:0.01:2*pi;
xp=radCircle*cos(ang);
yp=radCircle*sin(ang);
plot(xCircle+xp,yCircle+yp,'Color','red', 'LineWidth',5);
The resulting image:
Remarks:
For other images you will still have to fine-tune several parameters, like the radius range you search, the color window, the Hough circle threshold, and the Canny edge thresholds.
In the function I searched for circles with radii from 40 to 80 pixels. Here you can use your prior information about the real-world radius of the window and the resolution of the camera. If you know approximately the distance from the camera to the airplane, the camera's resolution, and the window radius in cm, you can use that to get the radius in pixels and use it for the Hough circle transform.
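As a rough illustration of that conversion, here is a back-of-the-envelope sketch (in Python, separate from the MATLAB code above; every numeric value below is an assumption for illustration only) based on the pinhole camera model:

# Sketch: estimate the expected window radius in pixels from assumed camera
# parameters and the known physical radius, via r_px = f_px * R / Z.
focal_length_mm = 4.0      # assumed lens focal length
sensor_width_mm = 6.2      # assumed sensor width
image_width_px  = 1920     # assumed image width
window_radius_m = 0.23     # outer window radius from the question (23 cm)
distance_m      = 5.0      # assumed camera-to-door distance

focal_length_px = focal_length_mm / sensor_width_mm * image_width_px
radius_px = focal_length_px * window_radius_m / distance_m
print(round(radius_px))    # use this, plus a safety margin, as the Hough radius range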
I wouldn't worry too much about the exact geometry and calibration; rather, find the window by its own characteristics.
Binarization works relatively well, be it on the whole image or in a large region of interest.
Then you can select the most likely blob based on its approximate area and/or circularity.
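A minimal sketch of that approach (using OpenCV in Python as an assumed toolchain, not code from this answer): binarize, then keep the blob whose shape is closest to a circle.

import cv2
import numpy as np

img = cv2.imread("door.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

best, best_score = None, 0.0
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if area < 500 or perimeter == 0:                   # assumed minimum blob area
        continue
    circularity = 4 * np.pi * area / perimeter ** 2    # 1.0 for a perfect circle
    if circularity > best_score:
        best, best_score = c, circularity
# best now holds the most circular blob of plausible size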
tl;dr: When animating a model, each joint moves correctly, but not relative to its parent joint.
I am working on a skeletal animation system using a custom built IQE loader and renderer in Lua. Nearly everything is working at this point, except the skeleton seems to be disjointed when animating. Each joint translates, rotates, and scales correctly but is not taking the position of its parent into account, thus creating some awful problems.
In referencing the IQM spec and demo, I cannot for the life of me find out what is going wrong. My Lua code is (as far as I can tell) identical to the reference C++.
Calculating Base Joint Matrices:
local base = self.active_animation.base
local inverse_base = self.active_animation.inverse_base
for i, joint in ipairs(self.data.joint) do
    local pose = joint.pq
    local pos = { pose[1], pose[2], pose[3] }
    local rot = matrix.quaternion(pose[4], pose[5], pose[6], pose[7])
    local scale = { pose[8], pose[9], pose[10] }
    local m = matrix.matrix4x4()
    m = m:translate(pos)
    m = m:rotate(rot)
    m = m:scale(scale)
    local inv = m:invert()
    if joint.parent > 0 then
        base[i] = base[joint.parent] * m
        inverse_base[i] = inv * inverse_base[joint.parent]
    else
        base[i] = m
        inverse_base[i] = inv
    end
end
Calculating Animation Frame Matrices:
local buffer = {}
local base = self.active_animation.base
local inverse_base = self.active_animation.inverse_base
for k, pq in ipairs(self.active_animation.frame[self.active_animation.current_frame].pq) do
    local joint = self.data.joint[k]
    local pose = pq
    local pos = { pose[1], pose[2], pose[3] }
    local rot = matrix.quaternion(pose[4], pose[5], pose[6], pose[7])
    local scale = { pose[8], pose[9], pose[10] }
    local m = matrix.matrix4x4()
    m = m:translate(pos)
    m = m:rotate(rot)
    m = m:scale(scale)
    local f = matrix.matrix4x4()
    if joint.parent > 0 then
        f = base[joint.parent] * m * inverse_base[k]
    else
        f = m * inverse_base[k]
    end
    table.insert(buffer, f:to_vec4s())
end
The full code is here for further examination. The relevant code is in /libs/iqe.lua and is near the bottom in the functions IQE:buffer() and IQE:send_frame(). This code runs on a custom version of the LOVE game framework, and a Windows binary (and batch file) is included.
Final note: Our matrix code has been verified against other implementations and several tests.
Transformations of parent bones should affect the transformations of their children. This is achieved by specifying each bone's transformation in the frame of its parent. So, usually, bone transformations are specified in a local coordinate system that depends on the parent. If any of the parents is transformed, that transformation affects all of its children, even if their local transformations didn't change.
In your case, you cache the absolute (relative to the root, to be precise) transformation of each node once. Then you update the local transform of each node using that cache, and never update the cache itself. So how would a change in a node's local transform affect its child, if when you update the child you use the cache instead of the actual parent transform?
There is one more issue. Why do you do the following?
f = base[joint.parent] * m * inverse_base[k]
I mean, usually it would be just:
f = base[joint.parent] * m
I guess the transformations recorded in the animation are absolute (relative to the root, to be precise). That is very strange; usually every transformation is local. Check this, because it will cause you lots of problems.
Moreover, I don't see any need to cache anything in your case (except inverse_base, which is usually not needed either).
Change your IQE:send_frame() function as follows:
local buffer = {}
local transforms = {}
local inverse_base = self.active_animation.inverse_base
for k, pq in ipairs(self.active_animation.frame[self.active_animation.current_frame].pq) do
    local joint = self.data.joint[k]
    local pose = pq
    local pos = { pose[1], pose[2], pose[3] }
    local rot = matrix.quaternion(pose[4], pose[5], pose[6], pose[7])
    local scale = { pose[8], pose[9], pose[10] }
    local m = matrix.matrix4x4()
    m = m:translate(pos)
    m = m:rotate(rot)
    m = m:scale(scale)
    local f = matrix.matrix4x4()
    if joint.parent > 0 then
        transforms[k] = transforms[joint.parent] * m
        f = transforms[k] * inverse_base[k]
    else
        f = m * inverse_base[k]
        transforms[k] = m
    end
    table.insert(buffer, f:to_vec4s())
end
This works well for me. Try to get rid of inverse_base, and you will be able to remove all animation-related code from your IQE:buffer() function.
P.S. Usually, all nodes are updated by traversing down the tree. However, you update nodes by walking through a flat list. You should be aware that you must somehow guarantee that for any node, its children come after it in the list.
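A quick way to check that invariant (sketched in Python, since the check itself is language-agnostic; it assumes the joints are a flat list with a 1-based parent index and 0 for roots, as in the IQE data):

# Sketch: confirm every joint's parent appears earlier in the flat list, so a
# single forward pass always updates parents before their children.
def parents_come_first(joints):
    for i, joint in enumerate(joints, start=1):   # 1-based, like Lua's ipairs
        if joint["parent"] >= i:                  # parent must come strictly before
            return False
    return True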
Can anyone tell me the attribute to set to label the units on a LinePlot in ReportLab? Also, if you know how to set a title, that would be ultimately super helpful.
drawing = Drawing(50,50)
data = [(tuple(zip(matfile['chan'].item()[6], matfile['chan'].item()[7].item()[0])))]
lp = LinePlot()
lp.data = data
lp.????? = (str(matfile['chan'].item()[3]), str(matfile['chan'].item()[2]))
drawing.add(lp)
elements.append(drawing)
This is actually going to be inside a loop: I load a .mat file with about 50 channels, and I am going to plot almost all of them, separately. But first I need to get a handle on assigning the labels (title text, which will be the same as the channel name, then the units for the axes...). The x-axis label should always be 'Seconds'; the y-axis label will vary: sometimes a %, sometimes a pressure or temperature or speed, etc.
I have no idea how to do THAT, but I ended up using framing tables and cobbled something together. I did not succeed in rotating the text for the y-axis label.
for channel in channels:
    drawing = Drawing(0,0)
    data = [(tuple(zip(matfile[channel].item()[6],matfile[channel].item()[7].item()[0])))]
    lp = LinePlot()
    lp.data = data
    lp.width = 6*inch
    lp.height = 3.25*inch
    stylesheet = getSampleStyleSheet()
    y_label = Paragraph(str(matfile[channel].item()[2]), stylesheet['Normal'])
    drawing.add(lp)
    plot_table = [['', str(channel)],
                  [y_label, drawing],
                  ['', matfile[channel].item()[3]]]
    t_framing_table = Table(plot_table)
    t_framing_table._argH[1] = lp.height + .5*inch
    t_framing_table._argW[1] = lp.width
    elements.append(t_framing_table)
    if break_page:
        elements.append(PageBreak())
        break_page = False
    else:
        break_page = True
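Not from the original answer, but for reference: ReportLab can draw a rotated y-axis label directly with the Label widget from reportlab.graphics.charts.textlabels, and a title with a plain String, which avoids the framing table. A sketch with placeholder sizes and text:

from reportlab.graphics.shapes import Drawing, String
from reportlab.graphics.charts.lineplots import LinePlot
from reportlab.graphics.charts.textlabels import Label
from reportlab.lib.units import inch

drawing = Drawing(7 * inch, 4.5 * inch)            # assumed drawing size
lp = LinePlot()
lp.x, lp.y = 0.75 * inch, 0.75 * inch              # leave room for the labels
lp.width, lp.height = 6 * inch, 3.25 * inch
lp.data = [[(0, 0), (1, 1), (2, 4)]]               # placeholder data

title = String(lp.x + lp.width / 2, lp.y + lp.height + 0.25 * inch,
               "Channel name here", textAnchor="middle")
x_label = String(lp.x + lp.width / 2, 0.25 * inch, "Seconds", textAnchor="middle")

y_label = Label()                                  # rotated y-axis label
y_label.setOrigin(0.25 * inch, lp.y + lp.height / 2)
y_label.angle = 90
y_label.setText("Units here")

drawing.add(lp)
drawing.add(title)
drawing.add(x_label)
drawing.add(y_label)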
Or, perhaps, not warped enough...
So I'm trying to take an image and specify four corners - then move those four corners into a (near-)perfect square in the middle of the image.
UPDATE 1 (at bottom): 9/10/13 @ 8:20 PM GMT. The math & matrix tags were added with this update. (If you read this update before 8:20, I must apologize; I gave you really bad info!)
I don't need it super-accurate, but my current results are very clearly not working, yet after looking at multiple other examples I cannot see what I've been doing wrong.
Here is my mock-up:
And through some magical process I obtain the coordinates. For this 640x480 mock-up the points are as follows:
Corners:
Upper-Left: 186, 87
Upper-Right: 471, 81
Lower-Left: 153, 350
Lower-Right: 500, 352
And the points I want to move the corners to are as follows:
Upper-Left: 176, 96
Upper-Right: 464, 96
Lower-Left: 176, 384
Lower-Right: 464, 384
Now, the end-goal here is to get the coordinates of the black dot-thing relative to the corners. I'm not making the box fill up the entire image because that point in the center could be outside the box given a different picture, so I want to keep enough "outside the box" room to generalize the process. I know I can get the point after it's moved, I'm just having trouble moving it correctly. My current warpPerspective attempts provide the following result:
Ok, so it looks like it's trying to fit things properly, but the corners didn't actually end up where we thought they would. The top-left is too far to the right. The bottom-left is too high, and the two on the right are both too far right and too close together. Well, ok... so let's try expanding the destination coordinates so it fills up the screen.
Just seems zoomed in? Are my coordinates somehow off? Do I need to feed it new coordinates? Source, destination, or both?
Here's my code: (This is obviously edited down to the key pieces of information, but if I missed something, please ask me to re-include it.)
Mat frame = inputPicture;
Point2f sourceCoords[4], destinationCoords[4];
// These values were pre-determined, and given above for this image.
sourceCoords[0].x = UpperLeft.X;
sourceCoords[0].y = UpperLeft.Y;
sourceCoords[1].x = UpperRight.X;
sourceCoords[1].y = UpperRight.Y;
sourceCoords[2].x = LowerLeft.X;
sourceCoords[2].y = LowerLeft.Y;
sourceCoords[3].x = LowerRight.X;
sourceCoords[3].y = LowerRight.Y;
// We need to make a square in the image. The 'if' is just in case the
// picture used is not longer left-to-right than it is top-to-bottom.
int top = 0;
int bottom = 0;
int left = 0;
int right = 0;
if (frame.cols >= frame.rows)
{
    int longSideMidpoint = frame.cols/2.0;
    int shortSideFifthpoint = frame.rows/5.0;
    int shortSideTenthpoint = frame.rows/10.0;
    top = shortSideFifthpoint;
    bottom = shortSideFifthpoint*4;
    left = longSideMidpoint - (3*shortSideTenthpoint);
    right = longSideMidpoint + (3*shortSideTenthpoint);
}
else
{
    int longSideMidpoint = frame.rows/2.0;
    int shortSideFifthpoint = frame.cols/5.0;
    int shortSideTenthpoint = frame.cols/10.0;
    top = longSideMidpoint - (3*shortSideTenthpoint);
    bottom = longSideMidpoint + (3*shortSideTenthpoint);
    left = shortSideFifthpoint;
    right = shortSideFifthpoint*4;
}
// This code was used instead when putting the destination coords on the edges.
//top = 0;
//bottom = frame.rows-1;
//left = 0;
//right = frame.cols-1;
destinationCoords[0].y = left; // UpperLeft
destinationCoords[0].x = top; // UL
destinationCoords[1].y = right; // UpperRight
destinationCoords[1].x = top; // UR
destinationCoords[2].y = left; // LowerLeft
destinationCoords[2].x = bottom; // LL
destinationCoords[3].y = right; // LowerRight
destinationCoords[3].x = bottom; // LR
Mat warp_matrix = cvCreateMat(3, 3, CV_32FC1);
warp_matrix = getPerspectiveTransform(sourceCoords, destinationCoords); // This seems to set the warp_matrix to 3x3 even if it isn't.
warpPerspective(frame, frame, warp_matrix, frame.size(), CV_INTER_LINEAR, 0);
IplImage *savePic = new IplImage(frame);
sprintf(fileName, "test/%s/photo%i-7_warp.bmp", startupTime, Count);
cvSaveImage(fileName, savePic);
delete savePic;
I've also tried using perspectiveTransform, but that lead to the error:
OpenCV Error: Assertion failed (scn + 1 == m.cols && (depth == CV_32F
|| depth == CV_64F)) in unknown function, file
C:\slace\builds\WinInstallerMegaPack\src\opencv\modules\core\src\matmul.cpp,
line 1926
which lead me to trying getHomography leading to THIS error instead:
OpenCV Error: Assertion failed (npoints >= 0 && points2.checkVector(2)
== npoints && points1.type()) in unknown function, file C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\calib3d\src\fundam.cpp,
line 1074
(I checked: npoints IS greater than zero, because it equals points1.checkVector(2). It's failing because points1.checkVector(2) and points2.checkVector(2) don't match, but I don't understand what checkVector(2) does. points1 and points2 are taken from the coordinates I tried feeding getHomography, which were the same coordinates as above.)
Any idea how to get the output I'm looking for? I've been left confused for a while now. :/
^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^
UPDATE 1:
Ok, so I've figured out how it's supposed to calculate things.
getPerspectiveTransform is supposed to find a 3x3 matrix such that:
|M00 M10 M20| |X| |c*X'|
|M01 M11 M21| * |Y| = |c*Y'|
|M02 M12 M22| |1| | c |
Where MXX is the constant matrix, X & Y are the input coordinates, and X' & Y' are the output coordinates. This has to work for all four input/output coordinate combos. (The idea is simple, even if the math to get there may not be - I'm still not sure how they're supposed to get that matrix... input here would be appreciated - since I only need one actual coordinate I would not mind just bypassing getPerspectiveTransform and WarpPerspective entirely and just using a mathematical solution.)
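For reference, the matrix can be recovered by hand from the four correspondences: fix the bottom-right entry to 1 and solve the resulting 8x8 linear system. A NumPy sketch (not the OpenCV internals; it uses the usual row-major convention, with the source and destination corners from above):

import numpy as np

src = [(186, 87), (471, 81), (153, 350), (500, 352)]   # corners from the question
dst = [(176, 96), (464, 96), (176, 384), (464, 384)]

A, b = [], []
for (x, y), (u, v) in zip(src, dst):
    # u = (m00*x + m01*y + m02) / (m20*x + m21*y + 1), and similarly for v
    A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
    A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)

m = np.linalg.solve(np.array(A, float), np.array(b, float))
M = np.append(m, 1.0).reshape(3, 3)    # maps (X, Y, 1) to (c*X', c*Y', c)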
After you've gotten the perspective transform matrix, WarpPerspective basically just moves each pixel's coordinates by multiplying:
|M00 M10 M20| |X|
|M01 M11 M21| * |Y|
|M02 M12 M22| |1|
for each coordinate. Then dividing cX' & cY' both by c (obviously). Then some averaging needs to be done since the results are unlikely to be perfect integers.
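In code, that per-point step looks something like this (a small NumPy sketch for illustration):

import numpy as np

def warp_point(M, x, y):
    cx, cy, c = M @ np.array([x, y, 1.0])   # gives (c*X', c*Y', c)
    return cx / c, cy / c                   # divide through by c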
Ok, the basic idea is easy enough, but here's the problem: getPerspectiveTransform does not seem to be working!
In my above example I modified the program to print out the matrix it was getting from getPerspectiveTransform. It gave me this:
|1.559647 0.043635 -37.808761|
|0.305521 1.174385 -50.688854|
|0.000915 0.000132 1.000000|
In my above example, I gave for the upper-left coordinate 186, 87 to be moved to 176, 96. Unfortunately when I multiply the above warp_matrix with the input coordinate (186, 87) I get not 176, 96 - but 217, 92! Which is the same result WarpPerspective gets. So at least we know WarpPerspective is working...
Um, Alex...
Your X & Y coordinates are reversed on both your Source and Destination coordinates.
In other words:
sourceCoords[0].x = UpperLeft.X;
sourceCoords[0].y = UpperLeft.Y;
should be
sourceCoords[0].x = UpperLeft.Y;
sourceCoords[0].y = UpperLeft.X;
and
destinationCoords[0].y = top;
destinationCoords[0].x = left;
should be
destinationCoords[0].y = left;
destinationCoords[0].x = top;
Jus' thought you'd like to know.
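A quick numeric check (not part of the original answer) backs this up; applying the warp_matrix printed in UPDATE 1 to the upper-left corner with the coordinates swapped lands on the intended destination:

import numpy as np

M = np.array([[1.559647, 0.043635, -37.808761],
              [0.305521, 1.174385, -50.688854],
              [0.000915, 0.000132,   1.000000]])

p = M @ np.array([186.0, 87.0, 1.0])   # corner fed as (x, y)
print(p[:2] / p[2])                    # ~ (217, 92): the wrong result described above

q = M @ np.array([87.0, 186.0, 1.0])   # same corner fed as (y, x)
print(q[:2] / q[2])                    # ~ (96, 176): i.e. the intended (176, 96), swapped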
I'm stuck on a simple problem (I just started using the lib):
What is the most effective way to create sets of objects, replicate them and transform them?
Creating a bunch of circles around a center point:
var height = 150,
width = 150,
branchRadius = 20,
centerX = width / 2,
centerY = height / 2,
treeCrownCenterOffset = 10,
treeCrownRotationCenterX = centerX,
treeCrownRotationCenterY = centerY - treeCrownCenterOffset,
rotationCenter = [treeCrownRotationCenterX , treeCrownRotationCenterY],
paper = Raphael('logo', height , width ),
outerCircle = paper.circle(treeCrownRotationCenterX, treeCrownRotationCenterY, branchRadius).attr({fill:'#F00'}),
innerCircle = paper.circle(treeCrownRotationCenterX, treeCrownRotationCenterY, branchRadius - 4).attr({fill:'#000'}),
branch = paper.set([outerCircle, innerCircle]),
crown = paper.set(),
numCircles = 8, rotationStep = 360 / numCircles, i = 0;
for(; i < numCircles - 1 ; i++) {
    //Cloning a branch and pushing it to the crown after transforming it
    crown.push(branch.clone().transform('r' + ((i + 1) * rotationStep) + ',' + rotationCenter));
}
//Putting a second crown 200px right of the first crown
//Yes, I'm building a tree :-) No trunk at this point
crown.clone().transform('t200,0');
If you like violin, here is the fiddle.
This is my naive code, written in the belief that a set of cloned sets (the crown of cloned branches) would indeed be moved to position (200, 0), next to the first crown.
It doesn't work: looks like a cloned set of cloned elements cannot be moved:
crown.clone().transform('t200,0');
Not much happens when this line is executed.
Seems like "cloning" is not doing what I expect, and that the transformations are not carried to the second (cloned) collection of objects.
The basic question is:
How does one go about creating reusable objects with Raphael?
Thanks.
You are cloning the set, but since your canvas is only 150px wide, translating it by 200 pixels is sending it off the reservation :)
When you do expand the size of the canvas, however, you will see that only one circle appears to have been cloned. This is not the case. The problem is not with the cloning but the transformation.
I find transformations to be a huge headache. The line "crown.clone().transform('t200,0');" is applying that transformation to each object in the set, but I believe it is overriding the rotation. Even if it weren't, it would be applying the translation after the rotating, sending the circles scattering as if by centrifugal force.
I know you wanted to avoid looping through the cloned set, but this works:
var crown2 = crown.clone();
for (i = 0; i < crown2.length; i++) {
    crown2[i].transform('t200,0r' + (i * rotationStep) + ',' + rotationCenter);
}
Also, note that you didn't add the original branch to the set. You need this:
branch = paper.set([outerCircle, innerCircle]),
crown = paper.set(),
numCircles = 8, rotationStep = 360 / numCircles, i = 0;
//ADD ORIGINAL BRANCH TO SET
crown.push(branch);
Updated fiddle.