Pyqtgraph clip line - python-2.7

I'm trying to plot a Smith chart in pyqtgraph. I would like to know whether there is a method to clip
the ellipse items representing the imaginary circles against the real circle of radius 1.
This is what I've done so far: I used the setStartAngle and setSpanAngle methods of
QGraphicsEllipseItem, but this way I also plot the vertical and horizontal lines of the circle.
matplotlib has a method called set_clip_path(); do you know if there is something like it in pyqtgraph?
import pyqtgraph as pg

plot = pg.plot()
plot.setAspectLocked()
plot.addLine(y=0)

# values for the real (resistance) circles
rline = [0.2, 0.5, 1.0, 2.0, 5.0]
# values for the imaginary (reactance) circles
xline = [0.2, 0.5, 1, 2, 5]

circle1 = pg.QtGui.QGraphicsEllipseItem(1, -1, -2, 2)
circle1.setPen(pg.mkPen(1))
plot.addItem(circle1)

for r in rline:
    raggio = 1./(1+r)  # raggio = radius
    circle = pg.QtGui.QGraphicsEllipseItem(1, -raggio, -raggio*2, raggio*2)
    circle.setPen(pg.mkPen(0.2))
    plot.addItem(circle)

for x in xline:
    # draw the imaginary circle (Qt angles are in 1/16ths of a degree)
    circle = pg.QtGui.QGraphicsEllipseItem(x + 1, 0, -x*2, x*2)
    circle.setPen(pg.mkPen(0.2))
    circle.setStartAngle(1440)
    circle.setSpanAngle(1440)
    plot.addItem(circle)
EDIT
This is my final code:
plot.setAspectLocked()
plot.setXRange(-1, 1, padding=0)
plot.setYRange(-1, 1, padding=0)
#plot.addLine(y=0)

rline = [0.2, 0.5, 1.0, 2.0, 5.0]
xline = [0.2, 0.5, 1, 2, 5]

circle1 = pg.QtGui.QGraphicsEllipseItem(1, -1, -2, 2)
circle1.setPen(pg.mkPen('w', width=0))
circle1.setFlag(circle1.ItemClipsChildrenToShape)
plot.addItem(circle1)

pathItem = pg.QtGui.QGraphicsPathItem()
path = pg.QtGui.QPainterPath()
path.moveTo(1, 0)

for r in rline:
    raggio = 1./(1+r)
    path.addEllipse(1, -raggio, -raggio*2, raggio*2)

for x in xline:
    path.arcTo(x + 1, 0, -x*2, x*2, 90, -180)
    path.moveTo(1, 0)
    path.arcTo(x + 1, 0, -x*2, -x*2, 270, 180)

pathItem.setPath(path)
pathItem.setPen(pg.mkPen('g', width=0.2))
pathItem.setParentItem(circle1)

Clipping is supported, but probably not the best option. A few possibilities:
Use QGraphicsPathItem combined with QPainterPath.arcTo to draw arcs without the radial lines. This would also allow you to add multiple arcs to a single item rather than adding many items, which should improve performance.
Use something like PlotCurveItem or arrayToQPath to manually draw your own arcs. If you use the connect argument, you'll again be able to generate multiple separate arcs on a single item (a sketch follows the clipping example below).
Clipping is handled by Qt; see QGraphicsItem.itemClipsToShape and QGraphicsItem.itemClipsChildrenToShape. Beware: if you use this, you must set the pen width of the clipping object to 0 (Qt only partially supports cosmetic pens with width > 0). Example:
import pyqtgraph as pg
plot = pg.plot()
e1 = pg.QtGui.QGraphicsEllipseItem(0, 0, 4, 4)
# MUST have width=0 here, or use a non-cosmetic pen:
e1.setPen(pg.mkPen('r', width=0))
e1.setFlag(e1.ItemClipsChildrenToShape)
plot.addItem(e1)
e2 = pg.QtGui.QGraphicsEllipseItem(2, 2, 4, 4)
e2.setPen(pg.mkPen('g'))
e2.setParentItem(e1)
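For the second option, here is a minimal sketch using pg.arrayToQPath with a per-point connect array to pack several arcs into a single QGraphicsPathItem. The arc radii are arbitrary placeholders, not Smith-chart values:
import numpy as np
import pyqtgraph as pg

plot = pg.plot()
plot.setAspectLocked()

# Sample three arcs into one shared vertex array.
theta = np.linspace(0, np.pi, 64)
xs, ys, connect = [], [], []
for radius in (0.5, 0.75, 1.0):
    xs.append(radius * np.cos(theta))
    ys.append(radius * np.sin(theta))
    c = np.ones(len(theta), dtype=np.ubyte)
    c[-1] = 0  # do not connect the last point of this arc to the next arc
    connect.append(c)

path = pg.arrayToQPath(np.concatenate(xs), np.concatenate(ys),
                       connect=np.concatenate(connect))
item = pg.QtGui.QGraphicsPathItem(path)
item.setPen(pg.mkPen('g'))
plot.addItem(item)
Because all the arcs live in one path, the scene contains a single QGraphicsItem no matter how many circles the chart needs.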

Related

PTB-OpenGL stereo rendering and eye separation value

I tried to draw a 3D dot cloud using OpenGL asymmetric-frustum parallel-axis projection. The general principle can be found on this website (http://paulbourke.net/stereographics/stereorender/#). The problem is that when I use the real eye separation (0.06 m), my eyes do not fuse the images well. With an eye separation of 1/30 of the focal length, there is no strain. I don't know whether there is a problem with the calculation or with the parameters. Part of the code is posted below. Thank you all.
for view = 0:stereoViews
    % Select 'view' to render (left- or right-eye):
    Screen('SelectStereoDrawbuffer', win, view);
    % Manually reenable 3D mode in preparation of eye draw cycle:
    Screen('BeginOpenGL', win);
    % Set the eye separation:
    eye = 0.06; % in meters
    % Calculate the frustum shift at the near plane:
    fshift = 0.5 * eye * depthrangen/(vdist/100); % vdist is the viewing distance in cm (56), so vdist/100 = 0.56 m
    right_near = depthrangen * tand(FOV/2); % depthrangen is the depth of the near plane, 0.4; FOV is the field of view, 18 degrees
    left_near = -right_near;
    top_near = right_near * aspectr;
    bottom_near = -top_near;
    % Setup frustum projection for this eye's 'view':
    glMatrixMode(GL.PROJECTION)
    glLoadIdentity;
    eyeside = 1 + (-2*view); % 1 for left eye, -1 for right eye
    glFrustum(left_near + eyeside * fshift, right_near + eyeside * fshift, bottom_near, top_near, depthrangen, depthrangefObj);
    % Setup camera for this eye's 'view':
    glMatrixMode(GL.MODELVIEW);
    glLoadIdentity;
    gluLookAt(0 - eyeside * 0.5 * eye, 0, 0, 0 - eyeside * 0.5 * eye, 0, -1, 0, 1, 0);
    % Clear color and depth buffers:
    glClear;
    moglDrawDots3D(win, xyz(:,:,iframe), 10, [], [], 1);
    moglDrawDots3D(win, xyzObj(:,:,iframe), 10, [], [], 1);
    % Manually disable 3D mode before calling Screen('Flip')!
    Screen('EndOpenGL', win);
    % Repeat for other eye's view if in stereo presentation mode...
end
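For reference, the frustum-shift arithmetic above can be checked numerically. A short sketch using the values from the comments (the variable names are mine):
from math import tan, radians

eye = 0.06            # real interocular distance in meters
vdist = 0.56          # viewing distance / focal length in meters (56 cm)
near = 0.4            # depth of the near clipping plane in meters
half_fov = radians(18.0) / 2.0

# Frustum shift at the near plane (same formula as the MATLAB code):
fshift = 0.5 * eye * near / vdist       # ~0.0214 m
right_near = near * tan(half_fov)       # ~0.0633 m, half-width of the near plane

print(fshift / right_near)              # ~0.34: the shift is a third of the half-width

# With the 1/30 rule, the shift is roughly three times smaller:
eye30 = vdist / 30.0                    # ~0.0187 m
print(0.5 * eye30 * near / vdist)       # ~0.0067 m
So the formula itself matches Bourke's parallel-axis setup; the real eye separation simply produces about three times the screen parallax of the 1/30 rule, which may explain why fusion feels harder even though the calculation is correct.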

Corner detection in a complex image

I would like to find a fixed point in pictures like the one above for later comparison, and I thought of taking the upper-left corner of the board. I tried a few things, but the result is the green dot shown. I would like a way to place that dot on the corner of the board, not above it. I also want that point to stay the same across a set of pictures of the same board, possibly with some change in orientation. I am using Python 2.7.
Code I have tried so far:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (3, 3), 0)
edged = cv2.Canny(blurred, 10, 200)
edged = cv2.dilate(edged, None, iterations=6)
edged = cv2.erode(edged, None, iterations=6)
(contourss, _) = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
                                  cv2.CHAIN_APPROX_SIMPLE)
contourss = sorted(contourss, key=cv2.contourArea, reverse=True)[:10]
cv2.drawContours(image, contourss[0], -1, (0, 255, 0), 2)
rect1 = cv2.minAreaRect(contourss[0])
box1 = cv2.cv.BoxPoints(rect1)
box1 = np.int0(box1)
topleftPer = []
for i in box1[1]:
    topleftPer.append(i)
pt = (topleftPer[0], topleftPer[1])
cv2.circle(image, pt, 5, (0, 255, 0), -1)
Always amazing to see how much people want to rely on edge detection. Edge detection is so unreliable!
This image is easy to binarize. Find the black pixel with the smallest value of x + y and place a small ROI around this pixel. Then use the leftmost and topmost coordinates within the ROI.
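A minimal OpenCV sketch of that suggestion, assuming `image` is the BGR photo from the question and that the board is dark on a lighter background (the ROI size is a guess):
import cv2
import numpy as np

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Binarize; Otsu picks the threshold, THRESH_BINARY_INV makes board pixels white.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# The board pixel minimizing x + y is the one nearest the top-left image corner.
ys, xs = np.nonzero(binary)
idx = np.argmin(xs + ys)
sx, sy = int(xs[idx]), int(ys[idx])

# Small ROI around that seed; take the leftmost/topmost board pixel inside it.
r = 15
x0, y0 = max(0, sx - r), max(0, sy - r)
roi_ys, roi_xs = np.nonzero(binary[y0:sy + r, x0:sx + r])
corner = (x0 + int(roi_xs.min()), y0 + int(roi_ys.min()))
cv2.circle(image, corner, 5, (0, 255, 0), -1)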

OpenGL rotation matrix pivot animation

1,          0,           0, 0,
0, cos(theta), -sin(theta), 0,
0, sin(theta),  cos(theta), 0,
0,          0,           0, 1;
I'm trying to create a 'swinging' animation using a rectangular prism. The animation is very basic: the prism swings back and forth, like the arms of this robot toy. I need to use the above matrix.
I just need help figuring out a series of values for theta that can be plugged into this matrix to make the rectangular prism it is applied to swing back and forth, like in the image linked to above.
You may want to use lerping (linear interpolation) to get a smooth animation. It's hard to tell from the image, but a min/max pair of -35 and +35 degrees might do the trick.
Edit: Lerping
Using a value of t between 0 and 1 with a small increment will give you incremental positions between your min and max values, called a and b in the formula below:
0 <= t <= 1
x = b * t + (1 - t) * a
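A minimal sketch of that idea, assuming the -35/+35 degree limits: t ping-pongs between 0 and 1 as a triangle wave and feeds the lerp formula above.
a, b = -35.0, 35.0  # swing limits in degrees (estimated from the toy)

def swing_theta(frame, period=120):
    # Triangle wave: t goes 0 -> 1 over the first half of the period,
    # then 1 -> 0 over the second half.
    phase = (frame % period) / float(period)
    t = 2.0 * phase if phase < 0.5 else 2.0 * (1.0 - phase)
    return b * t + (1.0 - t) * a  # the lerp from the answer

thetas = [swing_theta(f) for f in range(240)]  # two full swings
A triangle wave gives constant angular speed with abrupt reversals; replacing t with (1 - cos(2*pi*phase)) / 2 would ease in and out like a real pendulum.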

OpenGL: Keeping arcball camera up-vector aligned with y-axis

I'm essentially trying to mimic the way the camera rotates in Maya. The arcball in Maya is always aligned with the y-axis, so no matter where the up-vector points, it stays registered with its up-vector along the y-axis.
I've been able to implement an arcball in OpenGL using C++ and Qt, but I can't figure out how to keep its up-vector aligned. I've been able to keep it aligned at times with the code below:
void ArcCamera::setPos (Vector3 np)
{
    Vector3 up(0, 1, 0);
    Position = np;
    ViewDir = (ViewPoint - Position); ViewDir.normalize();
    RightVector = ViewDir ^ up; RightVector.normalize();
    UpVector = RightVector ^ ViewDir; UpVector.normalize();
}
This works until the position reaches 90 degrees; then the right vector flips and everything is inverted.
So instead I've been maintaining the total rotation (as a quaternion) and rotating the original vectors (up, right, position) by it. This works best for keeping everything coherent, but now I simply can't align the up-vector to the y-axis. Below is the rotation function.
void CCamera::setRot (QQuaternion q)
{
    tot = tot * q;
    Position = tot.rotatedVector(PositionOriginal);
    UpVector = tot.rotatedVector(UpVectorOriginal);
    UpVector.normalize();
    RightVector = tot.rotatedVector(RightVectorOriginal);
    RightVector.normalize();
}
The QQuaternion q is generated from the axis-angle pair derived from the mouse drag, and I'm confident this is done correctly. The rotation itself is fine; it just doesn't keep the orientation aligned.
I've noticed that in my implementation, dragging in the corners rotates around my view direction, and I can always realign the up-vector to the world's y-axis direction afterwards. So if I could figure out how much to roll, I could probably do two rotations each time to keep everything straight. However, I'm not sure how to go about this.
The reason this isn't working is that Maya's camera manipulation in the viewport does not use an arcball interface. What you want is Maya's tumble command. The best resource I've found for explaining this is this document from Professor Orr's Computer Graphics class.
Moving the mouse left and right corresponds to the azimuth angle, and specifies a rotation around the world space Y axis. Moving the mouse up and down corresponds to the elevation angle, and specifies a rotation around the view space X axis. The goal is to generate the new world-to-view matrix, then extract the new camera orientation and eye position from that matrix, based on however you've parameterized your camera.
Start with the current world-to-view matrix. Next, we need to define the pivot point in world space. Any pivot point will work to begin with, and it can be simplest to use the world origin.
Recall that pure rotation matrices generate rotations centered around the origin. This means that to rotate around an arbitrary pivot point, you first translate to the origin, perform the rotation, and translate back. Remember also that transformation composition happens from right to left, so the negative translation to get to the origin goes on the far right:
translate(pivotPosition) * rotate(angleX, angleY, angleZ) * translate(-pivotPosition)
We can use this to calculate the azimuth rotation component, which is a rotation around the world Y axis:
azimuthRotation = translate(pivotPosition) * rotateY(angleY) * translate(-pivotPosition)
We have to do a little additional work for the elevation rotation component, because it happens in view space, around the view space X axis:
elevationRotation = translate(worldToViewMatrix * pivotPosition) * rotateX(angleX) * translate(worldToViewMatrix * -pivotPosition)
We can then get the new view matrix with:
newWorldToViewMatrix = elevationRotation * worldToViewMatrix * azimuthRotation
Now that we have the new worldToView matrix, we're left with having to extract the new world space position and orientation from the view matrix. To do this, we want the viewToWorld matrix, which is the inverse of the worldToView matrix.
newOrientation = transpose(mat3(newWorldToViewMatrix))
newPosition = -((newOrientation * newWorldToViewMatrix).column(3))
At this point, we have the elements separated. If your camera is parameterized so that you're only storing a quaternion for your orientation, you just need to do the rotation matrix -> quaternion conversion. Of course, Maya is going to convert to Euler angles for display in the channel box, which will be dependent on the camera's rotation order (note that the math for tumbling doesn't change when the rotation order changes, just the way that the rotation matrix -> Euler angles conversion is done).
Here's a sample implementation in Python:
#!/usr/bin/env python
import numpy as np
from math import *

def translate(amount):
    'Make a translation matrix, to move by `amount`'
    t = np.matrix(np.eye(4))
    t[3] = amount.T
    t[3, 3] = 1
    return t.T

def rotateX(amount):
    'Make a rotation matrix, that rotates around the X axis by `amount` rads'
    c = cos(amount)
    s = sin(amount)
    return np.matrix([
        [1, 0, 0, 0],
        [0, c,-s, 0],
        [0, s, c, 0],
        [0, 0, 0, 1],
    ])

def rotateY(amount):
    'Make a rotation matrix, that rotates around the Y axis by `amount` rads'
    c = cos(amount)
    s = sin(amount)
    return np.matrix([
        [c, 0, s, 0],
        [0, 1, 0, 0],
        [-s, 0, c, 0],
        [0, 0, 0, 1],
    ])

def rotateZ(amount):
    'Make a rotation matrix, that rotates around the Z axis by `amount` rads'
    c = cos(amount)
    s = sin(amount)
    return np.matrix([
        [c,-s, 0, 0],
        [s, c, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
    ])

def rotate(x, y, z, pivot):
    'Make an XYZ rotation matrix, with `pivot` as the center of the rotation'
    m = rotateX(x) * rotateY(y) * rotateZ(z)
    I = np.matrix(np.eye(4))
    t = (I - m) * pivot
    m[0, 3] = t[0, 0]
    m[1, 3] = t[1, 0]
    m[2, 3] = t[2, 0]
    return m

def eulerAnglesZYX(matrix):
    'Extract the Euler angles from a ZYX rotation matrix'
    x = atan2(-matrix[1, 2], matrix[2, 2])
    cy = sqrt(1 - matrix[0, 2]**2)
    y = atan2(matrix[0, 2], cy)
    sx = sin(x)
    cx = cos(x)
    sz = cx * matrix[1, 0] + sx * matrix[2, 0]
    cz = cx * matrix[1, 1] + sx * matrix[2, 1]
    z = atan2(sz, cz)
    return np.array((x, y, z),)

def eulerAnglesXYZ(matrix):
    'Extract the Euler angles from an XYZ rotation matrix'
    z = atan2(matrix[1, 0], matrix[0, 0])
    cy = sqrt(1 - matrix[2, 0]**2)
    y = atan2(-matrix[2, 0], cy)
    sz = sin(z)
    cz = cos(z)
    sx = sz * matrix[0, 2] - cz * matrix[1, 2]
    cx = cz * matrix[1, 1] - sz * matrix[0, 1]
    x = atan2(sx, cx)
    return np.array((x, y, z),)

class Camera(object):
    def __init__(self, worldPos, rx, ry, rz, coi):
        # Initialize the camera orientation. In this case the original
        # orientation is built from XYZ Euler angles. orientation is the top
        # 3x3 XYZ rotation matrix for the view-to-world matrix, and can more
        # easily be thought of as the world space orientation.
        self.orientation = rotateZ(rz) * rotateY(ry) * rotateX(rx)
        # position is a point in world space for the camera.
        self.position = worldPos
        # Construct the world-to-view matrix, which is the inverse of the
        # view-to-world matrix.
        self.view = self.orientation.T * translate(-self.position)
        # coi is the "center of interest". It defines a point that is coi
        # units in front of the camera, which is the pivot for the tumble
        # operation.
        self.coi = coi

    def tumble(self, azimuth, elevation):
        '''Tumble the camera around the center of interest.

        Azimuth is the number of radians to rotate around the world-space Y axis.
        Elevation is the number of radians to rotate around the view-space X axis.
        '''
        # Find the world space pivot point. This is the view position in world
        # space minus the view direction vector scaled by the center of
        # interest distance.
        pivotPos = self.position - (self.coi * self.orientation.T[2]).T
        # Construct the azimuth and elevation transformation matrices
        azimuthMatrix = rotate(0, -azimuth, 0, pivotPos)
        elevationMatrix = rotate(elevation, 0, 0, self.view * pivotPos)
        # Get the new view matrix
        self.view = elevationMatrix * self.view * azimuthMatrix
        # Extract the orientation from the new view matrix
        self.orientation = np.matrix(self.view).T
        self.orientation.T[3] = [0, 0, 0, 1]
        # Now extract the new view position
        negEye = self.orientation * self.view
        self.position = -(negEye.T[3]).T
        self.position[3, 0] = 1

np.set_printoptions(precision=3)
pos = np.matrix([[5.321, 5.866, 4.383, 1]]).T
orientation = radians(-60), radians(40), 0
coi = 1
camera = Camera(pos, *orientation, coi=coi)
print 'Initial attributes:'
print np.round(np.degrees(eulerAnglesXYZ(camera.orientation)), 3)
print np.round(camera.position, 3)
print 'Attributes after tumbling:'
camera.tumble(azimuth=radians(-40), elevation=radians(-60))
print np.round(np.degrees(eulerAnglesXYZ(camera.orientation)), 3)
print np.round(camera.position, 3)
Keep track of your view and right vectors from the beginning and update them with the rotation matrix. Then calculate your up vector.
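A minimal numpy sketch of this second suggestion (the names are mine; R stands for the 3x3 rotation matrix derived from the mouse drag):
import numpy as np

def rotate_camera(R, view, right):
    # Rotate the stored view and right vectors by R, renormalize, then
    # rebuild the up vector so the basis stays orthonormal.
    view = R.dot(view)
    view /= np.linalg.norm(view)
    right = R.dot(right)
    right /= np.linalg.norm(right)
    up = np.cross(right, view)  # same cross product as the C++ code above
    return view, right, up

view = np.array([0.0, 0.0, -1.0])   # initial view direction
right = np.array([1.0, 0.0, 0.0])   # initial right vector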

Convert axes coordinates to pixel coordinates

I'm looking for an efficient way to convert axes coordinates to pixel coordinates for multiple screen resolutions.
For example, if I had a data set of values for temperature over time, something like:
int temps[] = {-8, -5, -4, 0, 1, 0, 3};
int times[] = {0, 12, 16, 30, 42, 50, 57};
What's the most efficient way to transform the data set to pixel coordinates so I could draw a graph on an 800x600 screen?
Assuming you're going from TEMP_MIN to TEMP_MAX, just do:
y[i] = (int)((float)(temps[i] - TEMP_MIN) * ((float)Y_MAX / (float)(TEMP_MAX - TEMP_MIN)));
where #define Y_MAX (600). Similarly for the x-coordinate. This isn't tested, so you may need to modify it slightly to deal with the edge-case (temps[i] == TEMP_MAX) properly.
You first need to determine the maximum and minimum values along each axis. Then you can do:
x_coord[i] = (x_val[i] - x_min) * X_RES / (x_max - x_min);
...and the same for Y. (Although you will probably want to invert the Y axis).
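A minimal sketch of that mapping for the sample data, assuming an 800x600 screen with pixel (0, 0) at the top-left corner (hence the Y flip):
temps = [-8, -5, -4, 0, 1, 0, 3]
times = [0, 12, 16, 30, 42, 50, 57]
W, H = 800, 600

t_min, t_max = min(temps), max(temps)
x_min, x_max = min(times), max(times)

# value -> pixel: shift to zero, then scale by pixels per data unit.
xs = [(t - x_min) * (W - 1) // (x_max - x_min) for t in times]
ys = [(H - 1) - (v - t_min) * (H - 1) // (t_max - t_min) for v in temps]

points = list(zip(xs, ys))  # e.g. points[0] == (0, 599), the coldest sample
Using W - 1 and H - 1 keeps the extreme values inside the 0..799 and 0..599 pixel ranges, which sidesteps the edge case mentioned in the first answer.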