I'm stuck on a simple problem (I just started using the lib):
What is the most effective way to create sets of objects, replicate them and transform them?
Creating a bunch of circles around a center point:
var height = 150,
    width = 150,
    branchRadius = 20,
    centerX = width / 2,
    centerY = height / 2,
    treeCrownCenterOffset = 10,
    treeCrownRotationCenterX = centerX,
    treeCrownRotationCenterY = centerY - treeCrownCenterOffset,
    rotationCenter = [treeCrownRotationCenterX, treeCrownRotationCenterY],
    paper = Raphael('logo', height, width),
    outerCircle = paper.circle(treeCrownRotationCenterX, treeCrownRotationCenterY, branchRadius).attr({fill: '#F00'}),
    innerCircle = paper.circle(treeCrownRotationCenterX, treeCrownRotationCenterY, branchRadius - 4).attr({fill: '#000'}),
    branch = paper.set([outerCircle, innerCircle]),
    crown = paper.set(),
    numCircles = 8, rotationStep = 360 / numCircles, i = 0;

for (; i < numCircles - 1; i++) {
    // Cloning a branch and pushing it to the crown after transforming it
    crown.push(branch.clone().transform('r' + ((i + 1) * rotationStep) + ',' + rotationCenter));
}
//Putting a second crown 200px right of the first crown
//Yes, I'm building a tree :-) No trunk at this point
crown.clone().transform('t200,0');
If you like violin, here is the fiddle.
This is my naive code, written thinking that a clone of the set (the crown of cloned branches) would indeed be moved to position (200, 0), next to the first crown.
It doesn't work: looks like a cloned set of cloned elements cannot be moved:
crown.clone().transform('t200,0');
Not much happens when this line is executed.
Seems like "cloning" is not doing what I expect, and that the transformations are not carried to the second (cloned) collection of objects.
The basic question is:
How does one go about creating reusable objects with Raphael?
Thanks.
You are cloning the set, but since your canvas is only 150px wide, translating it by 200 pixels is sending it off the reservation :)
When you do expand the size of the canvas, however, you will see that only one circle appears to have been cloned. This is not the case. The problem is not with the cloning but the transformation.
I find transformations to be a huge headache. The line "crown.clone().transform('t200,0');" is applying that transformation to each object in the set, but I believe it is overriding the rotation. Even if it weren't, it would be applying the translation after the rotating, sending the circles scattering as if by centrifugal force.
I know you wanted to avoid looping through the cloned set, but this works:
var crown2 = crown.clone();
for (i = 0; i < crown2.length; i++) {
    crown2[i].transform('t200,0r' + (i * rotationStep) + ',' + rotationCenter);
}
Also, note that you didn't add the original branch to the set. You need this:
branch = paper.set([outerCircle, innerCircle]),
crown = paper.set(),
numCircles = 8, rotationStep = 360 / numCircles, i = 0;
//ADD ORIGINAL BRANCH TO SET
crown.push(branch);
Updated fiddle.
I am trying to display two cams next to each other, rotated by 90°. Displaying both cams works fine, but if I want to rotate the cams, the program crashes.
The camera device is identified by a QByteArray and displayed through a QCamera object.
You can choose which camera is displayed in which viewfinder, so the code looks like this:
QActionGroup *videoDevicesGroup = new QActionGroup(this);
videoDevicesGroup->setExclusive(true);
foreach(const QByteArray &deviceName, QCamera::availableDevices()) {
    QString description = camera->deviceDescription(deviceName);
    QAction *videoDeviceAction = new QAction(description, videoDevicesGroup);
    videoDeviceAction->setCheckable(true);
    videoDeviceAction->setData(QVariant(deviceName));
    if (cameraDevice.isEmpty()) {
        cameraDevice = deviceName;
        videoDeviceAction->setChecked(true);
    }
    ui->menuDevices->addAction(videoDeviceAction);
}
connect(videoDevicesGroup, SIGNAL(triggered(QAction*)), SLOT(updateCameraDevice(QAction*)));
if (cameraDevice.isEmpty())
{
camera = new QCamera;
}
else
{
camera = new QCamera(cameraDevice);
}
connect(camera, SIGNAL(stateChanged(QCamera::State)), this, SLOT(updateCameraState(QCamera::State)));
connect(camera, SIGNAL(error(QCamera::Error)), this, SLOT(displayCameraError()));
camera->setViewfinder(ui->viewfinder);
updateCameraState(camera->state());
camera->start();
Now I'm trying to rotate this cam with the command:
std::rotate_copy(cameraDevice.constBegin(), cameraDevice.constEnd(), cameraDevice.constEnd(), reverse.begin());
camera = new QCamera(reverse);
But when I start the program, it crashes without any errors.
How can I fix this?
I think you have a misunderstanding of what std::rotate_copy does.
std::rotate_copy takes a range of data and shifts it as it copies into the location pointed to by the result iterator.
This won't rotate a camera. It just shifts and copies ranges: http://www.cplusplus.com/reference/algorithm/rotate_copy
EDIT:
Think about it this way: say I have std::string("wroybgivb");
Now say I do std::rotate_copy and pick the "y" as my middle: the std::string I copied into will contain "ybgivbwro".
Now think about that like I was working with a 3X3 image and each character represented a color:
wro ybg
ybg => ivb
ivb wro
Note that this is doing a linear array rotation (position shifting). I can never pick a middle such that rows will become columns and columns will become rows.
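To see what that means concretely, here is a minimal, self-contained sketch of the string example above (plain C++, nothing Qt-specific):

#include <algorithm>
#include <iostream>
#include <string>

int main() {
    std::string src = "wroybgivb";
    std::string dst(src.size(), ' ');
    // Pick the 'y' (index 3) as the middle: the copy starts there and wraps around.
    std::rotate_copy(src.begin(), src.begin() + 3, src.end(), dst.begin());
    std::cout << dst << '\n';   // prints "ybgivbwro"
    return 0;
}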
PS:
OK, so say you knew the width of the image and assigned it to the variable width, and the image is square, so its size is width * width. You can do something like this to rotate 90° clockwise:
for (int i = 0; i < size; ++i) {
    output[width - 1 - i / width + (i % width) * width] = input[i];
}
To understand this you need to understand indexing a linear array as though it were a 2D array.
Use this to get the x coordinate: i % width
Use this to get the y coordinate: i / width (the start of that row in the linear array is (i / width) * width)
Now you need to take those indices and rotate them while still staying inside a linear array.
The new x coordinate is: width - 1 - i / width
The new row offset is: (i % width) * width
Adding those two together gives the destination index used in the loop above.
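As a quick sanity check of that index math, here is a small self-contained example that rotates the 3x3 colour-letter image from earlier (it assumes a square image, which is what the single-width formula requires):

#include <iostream>
#include <string>

int main() {
    const int width = 3;
    std::string input = "wroybgivb";          // rows: "wro", "ybg", "ivb"
    std::string output(input.size(), ' ');
    for (int i = 0; i < width * width; ++i) {
        output[width - 1 - i / width + (i % width) * width] = input[i];
    }
    // Rotated 90 degrees clockwise the rows become "iyw", "vbr", "bgo".
    for (int row = 0; row < width; ++row) {
        std::cout << output.substr(row * width, width) << '\n';
    }
    return 0;
}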
I am currently developing a tool for the Kinect for Windows v2 (similar to the one in the Xbox One). I tried to follow some examples, and have a working example that shows the camera image, the depth image, and an image that maps the depth to the RGB using OpenCV. But I see that it duplicates my hand when doing the mapping, and I think it is due to something wrong in the coordinate mapper part.
Here is an example of it:
And here is the code snippet that creates the image (rgbd image in the example)
void KinectViewer::create_rgbd(cv::Mat& depth_im, cv::Mat& rgb_im, cv::Mat& rgbd_im){
    HRESULT hr = m_pCoordinateMapper->MapDepthFrameToColorSpace(cDepthWidth * cDepthHeight, (UINT16*)depth_im.data, cDepthWidth * cDepthHeight, m_pColorCoordinates);
    rgbd_im = cv::Mat::zeros(depth_im.rows, depth_im.cols, CV_8UC3);
    double minVal, maxVal;
    cv::minMaxLoc(depth_im, &minVal, &maxVal);
    for (int i = 0; i < cDepthHeight; i++){
        for (int j = 0; j < cDepthWidth; j++){
            if (depth_im.at<UINT16>(i, j) > 0 && depth_im.at<UINT16>(i, j) < maxVal * (max_z / 100) && depth_im.at<UINT16>(i, j) > maxVal * min_z / 100){
                double a = i * cDepthWidth + j;
                ColorSpacePoint colorPoint = m_pColorCoordinates[i * cDepthWidth + j];
                int colorX = (int)(floor(colorPoint.X + 0.5));
                int colorY = (int)(floor(colorPoint.Y + 0.5));
                if ((colorX >= 0) && (colorX < cColorWidth) && (colorY >= 0) && (colorY < cColorHeight))
                {
                    rgbd_im.at<cv::Vec3b>(i, j) = rgb_im.at<cv::Vec3b>(colorY, colorX);
                }
            }
        }
    }
}
Does anyone have a clue of how to solve this? How to prevent this duplication?
Thanks in advance
UPDATE:
If I do a simple depth image thresholding I obtain the following image:
This is more or less what I expected to happen, without a duplicate hand in the background. Is there a way to prevent this duplicate hand in the background?
I suggest you use the BodyIndexFrame to identify whether a specific value belongs to a player or not. This way, you can reject any RGB pixel that does not belong to a player and keep the rest of them. I do not think that CoordinateMapper is lying.
A few notes:
Include the BodyIndexFrame source to your frame reader
Use MapColorFrameToDepthSpace instead of MapDepthFrameToColorSpace; this way, you'll get the HD image for the foreground
Find the corresponding DepthSpacePoint and depthX, depthY, instead of ColorSpacePoint and colorX, colorY
Here is my approach when a frame arrives (it's in C#):
depthFrame.CopyFrameDataToArray(_depthData);
colorFrame.CopyConvertedFrameDataToArray(_colorData, ColorImageFormat.Bgra);
bodyIndexFrame.CopyFrameDataToArray(_bodyData);
_coordinateMapper.MapColorFrameToDepthSpace(_depthData, _depthPoints);
Array.Clear(_displayPixels, 0, _displayPixels.Length);
for (int colorIndex = 0; colorIndex < _depthPoints.Length; ++colorIndex)
{
    DepthSpacePoint depthPoint = _depthPoints[colorIndex];
    if (!float.IsNegativeInfinity(depthPoint.X) && !float.IsNegativeInfinity(depthPoint.Y))
    {
        int depthX = (int)(depthPoint.X + 0.5f);
        int depthY = (int)(depthPoint.Y + 0.5f);
        if ((depthX >= 0) && (depthX < _depthWidth) && (depthY >= 0) && (depthY < _depthHeight))
        {
            int depthIndex = (depthY * _depthWidth) + depthX;
            byte player = _bodyData[depthIndex];
            // Identify whether the point belongs to a player
            if (player != 0xff)
            {
                int sourceIndex = colorIndex * BYTES_PER_PIXEL;
                _displayPixels[sourceIndex] = _colorData[sourceIndex++]; // B
                _displayPixels[sourceIndex] = _colorData[sourceIndex++]; // G
                _displayPixels[sourceIndex] = _colorData[sourceIndex++]; // R
                _displayPixels[sourceIndex] = 0xff;                      // A
            }
        }
    }
}
Here is the initialization of the arrays:
BYTES_PER_PIXEL = (PixelFormats.Bgr32.BitsPerPixel + 7) / 8;
_colorWidth = colorFrame.FrameDescription.Width;
_colorHeight = colorFrame.FrameDescription.Height;
_depthWidth = depthFrame.FrameDescription.Width;
_depthHeight = depthFrame.FrameDescription.Height;
_bodyIndexWidth = bodyIndexFrame.FrameDescription.Width;
_bodyIndexHeight = bodyIndexFrame.FrameDescription.Height;
_depthData = new ushort[_depthWidth * _depthHeight];
_bodyData = new byte[_depthWidth * _depthHeight];
_colorData = new byte[_colorWidth * _colorHeight * BYTES_PER_PIXEL];
_displayPixels = new byte[_colorWidth * _colorHeight * BYTES_PER_PIXEL];
_depthPoints = new DepthSpacePoint[_colorWidth * _colorHeight];
Notice that the _depthPoints array has a 1920x1080 size.
Once again, the most important thing is to use the BodyIndexFrame source.
Finally I got some time to write the long-awaited answer.
Let's start with some theory to understand what is really happening, and then a possible answer.
We should start with the way to go from a 3D point cloud, which has the depth camera as its coordinate system origin, to an image in the image plane of the RGB camera. To do that, it is enough to use the pinhole camera model:
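(The equation itself was an image in the original post; in symbols it is roughly

s * [u, v, 1]^T = K * [R | t] * [X, Y, Z, 1]^T

where K is the 3x3 intrinsic matrix of the RGB camera, [R | t] is the rotation and translation from the depth camera frame to the RGB camera frame, s is a scale factor, and (X, Y, Z) is the 3D point.)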
Here, u and v are the coordinates in the image plane of the RGB camera. The first matrix on the right side of the equation is the camera matrix, a.k.a. the intrinsics of the RGB camera. The matrix that follows holds the rotation and translation of the extrinsics, or better said, the transformation needed to go from the depth camera coordinate system to the RGB camera coordinate system. The last part is the 3D point.
Basically, something like this is what the Kinect SDK does. So, what could go wrong that makes the hand get duplicated? Well, actually more than one point can project to the same pixel...
To put it in other words, in the context of the problem in the question:
The depth image is a representation of an ordered point cloud, and I am querying the u, v values of each of its pixels, which in reality can easily be converted to 3D points. The SDK gives you the projection, but two points can end up at the same pixel (usually, the greater the distance along the z axis between two neighbouring points, the more easily this problem shows up).
Now, the big question: how can you avoid this? Well, I am not sure it is possible using the Kinect SDK, since you do not know the Z value of the points after the extrinsics are applied, so it is not possible to use a technique like Z-buffering... However, you may assume the Z value will be quite similar and use the values from the original point cloud (at your own risk).
If you were doing it manually, and not with the SDK, you could apply the extrinsics to the points and then project them into the image plane, marking in another matrix which point is mapped to which pixel; if there is already a point mapped there, compare the z values and always keep the point closest to the camera. Then you will have a valid mapping without any problems. This is a rather naive way; you can probably find better ones, since the problem is now clear :)
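For illustration, here is a rough sketch of that z-buffered projection in plain C++ (this is not the Kinect SDK; the intrinsics fx, fy, cx, cy, the image size, and the input points, assumed to be already transformed into the RGB camera frame, are placeholders you would supply):

#include <cmath>
#include <limits>
#include <vector>

struct Point3 { float x, y, z; };

// Project 3D points (already expressed in the RGB camera frame) and keep,
// for every target pixel, only the index of the point closest to the camera.
void projectWithZBuffer(const std::vector<Point3>& points,
                        float fx, float fy, float cx, float cy,
                        int width, int height,
                        std::vector<int>& pixelToPoint)      // -1 means "no point"
{
    std::vector<float> zbuffer(width * height, std::numeric_limits<float>::max());
    pixelToPoint.assign(width * height, -1);

    for (int i = 0; i < static_cast<int>(points.size()); ++i) {
        const Point3& p = points[i];
        if (p.z <= 0.0f) continue;                           // behind the camera
        int u = static_cast<int>(std::floor(fx * p.x / p.z + cx + 0.5f));
        int v = static_cast<int>(std::floor(fy * p.y / p.z + cy + 0.5f));
        if (u < 0 || u >= width || v < 0 || v >= height) continue;

        int idx = v * width + u;
        if (p.z < zbuffer[idx]) {                            // keep only the nearest point
            zbuffer[idx] = p.z;
            pixelToPoint[idx] = i;
        }
    }
}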
I hope it is clear enough.
P.S.:
I do not have a Kinect 2 at the moment, so I can't check whether there has been an update related to this issue or whether the same thing still happens. I used the first released version (not the pre-release) of the SDK... so a lot of changes may have happened... If someone knows whether this was solved, just leave a comment :)
I have set up a scene in SceneKit and have issued a hit-test to select an item. However, I want to be able to move that item along a plane in my scene. I continue to receive mouse drag events, but don't know how to transform those 2D coordinates into 3D coordinate in the scene.
My case is very simple. The camera is located at 0, 0, 50 and pointed at 0, 0, 0. I just want to drag my object along the z-plane with a z-value of 0.
The hit-test works like a charm, but how do I translate the mouse point from a drag event into a new position in the scene for the 3D object I am dragging?
You don't need to use invisible geometry — Scene Kit can do all the coordinate conversions you need without having to hit test invisible objects. Basically you need to do the same thing you would in a 2D drawing app for moving an object: find the offset between the mouseDown: location and the object position, then for each mouseMoved:, add that offset to the new mouse location to set the object's new position.
Here's an approach you could use...
1. Hit-test the initial click location as you're already doing. This gets you an SCNHitTestResult object identifying the node you want to move, right?
2. Check the worldCoordinates property of that hit test result. If the node you want to move is a child of the scene's rootNode, this is the vector you want for finding the offset. (Otherwise you'll need to convert it to the coordinate system of the parent of the node you want to move — see convertPosition:toNode: or convertPosition:fromNode:.)
3. You're going to need a reference depth for this point so you can compare mouseMoved: locations to it. Use projectPoint: to convert the vector you got in step 2 (a point in the 3D scene) back to screen space — this gets you a 3D vector whose x- and y-coordinates are a screen-space point and whose z-coordinate tells you the depth of that point relative to the clipping planes (0.0 is on the near plane, 1.0 is on the far plane). Hold onto this z-coordinate for use during mouseMoved:.
4. Subtract the position of the node you want to move from the mouse location vector you got in step 2. This gets you the offset of the mouse click from the object's position. Hold onto this vector — you'll need it until dragging ends.
5. On mouseMoved:, construct a new 3D vector from the screen coordinates of the new mouse location and the depth value you got in step 3. Then, convert this vector into scene coordinates using unprojectPoint: — this is the mouse location in your scene's 3D space (equivalent to the one you got from the hit test, but without needing to "hit" scene geometry).
6. Add the offset you got in step 4 to the new location you got in step 5 — this is the new position to move the node to. (Note: for live dragging to look right, you should make sure this position change isn't animated. By default the duration of the current SCNTransaction is zero, so you don't need to worry about this unless you've changed it already.)
(This is sort of off the top of my head, so you should probably double-check the relevant docs and headers. And you might be able to simplify this a bit with some math.)
As an experiment I implemented Mr Bishop's helpful answer. The drag doesn't quite work (the object - a chess piece - jumps off screen) because of differences in the coordinate magnitudes between the mouse click and the 3-D world. I've inserted log outputs here and there among the code.
I asked on the Apple forums if anyone knew the secret sauce to homogenize the coordinates but didn't get a decisive answer. One thing, I had made some experimental changes to Mr Bishop's method and the forum members advised me to return to his technique.
Despite my code's failings, I thought someone might find it a useful starting point. I suspect there are only one or two small problems with the code.
Note that the log of the world transform matrix of the object (chess piece) is not part of the process but one Apple forum member advised me that the matrix often offers a useful 'sanity check' - which indeed it did.
- (NSPoint)
viewPointForEvent: (NSEvent *) event_
{
    NSPoint windowPoint = [event_ locationInWindow];
    NSPoint viewPoint = [self.view convertPoint: windowPoint
                                       fromView: nil];
    return viewPoint;
}

- (SCNHitTestResult *)
hitTestResultForEvent: (NSEvent *) event_
{
    NSPoint viewPoint = [self viewPointForEvent: event_];
    CGPoint cgPoint = CGPointMake (viewPoint.x, viewPoint.y);
    NSArray * points = [(SCNView *) self.view hitTest: cgPoint
                                              options: @{}];
    return points.firstObject;
}

- (void)
mouseDown: (NSEvent *) theEvent
{
    SCNHitTestResult * result = [self hitTestResultForEvent: theEvent];
    SCNVector3 clickWorldCoordinates = result.worldCoordinates;
    // log output: clickWorldCoordinates x 208.124578, y -12827.223365, z 3163.659073

    SCNVector3 screenCoordinates = [(SCNView *) self.view projectPoint: clickWorldCoordinates];
    // log output: screenCoordinates x 245.128906, y 149.335938, z 0.985565

    // save the z coordinate for use in mouseDragged
    mouseDownClickOnObjectZCoordinate = screenCoordinates.z;

    selectedPiece = result.node; // save selected piece for use in mouseDragged
    SCNVector3 piecePosition = selectedPiece.position;
    // log output: piecePosition x -18.200000, y 6.483060, z 2.350000

    offsetOfMouseClickFromPiece.x = clickWorldCoordinates.x - piecePosition.x;
    offsetOfMouseClickFromPiece.y = clickWorldCoordinates.y - piecePosition.y;
    offsetOfMouseClickFromPiece.z = clickWorldCoordinates.z - piecePosition.z;
    // log output: offsetOfMouseClickFromPiece x 226.324578, y -12833.706425, z 3161.309073
}

- (void)
mouseDragged: (NSEvent *) theEvent
{
    NSPoint viewClickPoint = [self viewPointForEvent: theEvent];
    SCNVector3 clickCoordinates;
    clickCoordinates.x = viewClickPoint.x;
    clickCoordinates.y = viewClickPoint.y;
    clickCoordinates.z = mouseDownClickOnObjectZCoordinate;
    // log output: clickCoordinates x 246.128906, y 0.000000, z 0.985565

    // log output: pieceWorldTransform:
    // m11 = 242.15889219510001, m12 = -0.000045609300002524833, m13 = -0.00000721691076126, m14 = 0,
    // m21 = 0.0000072168760805499971, m22 = -0.000039452697396149999, m23 = 242.15890446329999, m24 = 0,
    // m31 = -0.000045609300002524833, m32 = -242.15889219510001, m33 = -0.000039452676995750002, m34 = 0,
    // m41 = -4268.2349924762348, m42 = -12724.050221935429, m43 = 4852.6652710104272, m44 = 1)

    SCNVector3 newPiecePosition;
    newPiecePosition.x = offsetOfMouseClickFromPiece.x + clickCoordinates.x;
    newPiecePosition.y = offsetOfMouseClickFromPiece.y + clickCoordinates.y;
    newPiecePosition.z = offsetOfMouseClickFromPiece.z + clickCoordinates.z;
    // log output: newPiecePosition x 472.453484, y -12833.706425, z 3162.294639

    selectedPiece.position = newPiecePosition;
}
I used the code written by Steve and with little modification it worked for me.
On mouseDown I save clickWorldCoordinates on a property called startClickWorldCoordinates.
On mouseDragged I calculate the selectedPiece position in this way:
SCNVector3 worldClickCoordinate = [(SCNView *) self.view unprojectPoint: clickCoordinates];
newPiecePosition.x = selectedPiece.position.x + worldClickCoordinate.x - startClickWorldCoordinates.x;
newPiecePosition.y = selectedPiece.position.y + worldClickCoordinate.y - startClickWorldCoordinates.y;
newPiecePosition.z = selectedPiece.position.z + worldClickCoordinate.z - startClickWorldCoordinates.z;
selectedPiece.position = newPiecePosition;
startClickWorldCoordinates = worldClickCoordinate;
Or, perhaps, not warped enough...
So I'm trying to take an image and specify four corners - then move those four corners into a (near-)perfect square in the middle of the image.
UPDATE 1 (at bottom): 9/10/13 @ 8:20 PM GMT The math & matrix tags were added with this update. (If you read this update before 8:20, I must apologize, I gave you really bad info!)
I don't need it super-accurate, but my current results are very clearly not working, yet after looking at multiple other examples I cannot see what I've been doing wrong.
Here is my mock-up:
And through some magical process I obtain the coordinates. For this 640x480 mock-up the points are as follows:
Corners:
Upper-Left: 186, 87
Upper-Right: 471, 81
Lower-Left: 153, 350
Lower-Right: 500, 352
And the points I want to move the corners to are as follows:
Upper-Left: 176, 96
Upper-Right: 464, 96
Lower-Left: 176, 384
Lower-Right: 464, 384
Now, the end-goal here is to get the coordinates of the black dot-thing relative to the corners. I'm not making the box fill up the entire image because that point in the center could be outside the box given a different picture, so I want to keep enough "outside the box" room to generalize the process. I know I can get the point after it's moved, I'm just having trouble moving it correctly. My current warpPerspective attempts provide the following result:
Ok, so it looks like it's trying to fit things properly, but the corners didn't actually end up where we thought they would. The top-left is too far to the right. The Bottom-Left is too high, and the two on the right are both too far right and too close together. Well, ok... so lets try expanding the destination coordinates so it fills up the screen.
Just seems zoomed in? Are my coordinates somehow off? Do I need to feed it new coordinates? Source, destination, or both?
Here's my code: (This is obviously edited down to the key pieces of information, but if I missed something, please ask me to re-include it.)
Mat frame = inputPicture;
Point2f sourceCoords[4], destinationCoords[4];
// These values were pre-determined, and given above for this image.
sourceCoords[0].x = UpperLeft.X;
sourceCoords[0].y = UpperLeft.Y;
sourceCoords[1].x = UpperRight.X;
sourceCoords[1].y = UpperRight.Y;
sourceCoords[2].x = LowerLeft.X;
sourceCoords[2].y = LowerLeft.Y;
sourceCoords[3].x = LowerRight.X;
sourceCoords[3].y = LowerRight.Y;
// We need to make a square in the image. The 'if' is just in case the
// picture used is not longer left-to-right than it is top-to-bottom.
int top = 0;
int bottom = 0;
int left = 0;
int right = 0;
if (frame.cols >= frame.rows)
{
    int longSideMidpoint = frame.cols/2.0;
    int shortSideFifthpoint = frame.rows/5.0;
    int shortSideTenthpoint = frame.rows/10.0;
    top = shortSideFifthpoint;
    bottom = shortSideFifthpoint*4;
    left = longSideMidpoint - (3*shortSideTenthpoint);
    right = longSideMidpoint + (3*shortSideTenthpoint);
}
else
{
    int longSideMidpoint = frame.rows/2.0;
    int shortSideFifthpoint = frame.cols/5.0;
    int shortSideTenthpoint = frame.cols/10.0;
    top = longSideMidpoint - (3*shortSideTenthpoint);
    bottom = longSideMidpoint + (3*shortSideTenthpoint);
    left = shortSideFifthpoint;
    right = shortSideFifthpoint*4;
}
// This code was used instead when putting the destination coords on the edges.
//top = 0;
//bottom = frame.rows-1;
//left = 0;
//right = frame.cols-1;
destinationCoords[0].y = left; // UpperLeft
destinationCoords[0].x = top; // UL
destinationCoords[1].y = right; // UpperRight
destinationCoords[1].x = top; // UR
destinationCoords[2].y = left; // LowerLeft
destinationCoords[2].x = bottom; // LL
destinationCoords[3].y = right; // LowerRight
destinationCoords[3].x = bottom; // LR
Mat warp_matrix = cvCreateMat(3, 3, CV_32FC1);
warp_matrix = getPerspectiveTransform(sourceCoords, destinationCoords); // This seems to set the warp_matrix to 3x3 even if it isn't.
warpPerspective(frame, frame, warp_matrix, frame.size(), CV_INTER_LINEAR, 0);
IplImage *savePic = new IplImage(frame);
sprintf(fileName, "test/%s/photo%i-7_warp.bmp", startupTime, Count);
cvSaveImage(fileName, savePic);
delete savePic;
I've also tried using perspectiveTransform, but that lead to the error:
OpenCV Error: Assertion failed (scn + 1 == m.cols && (depth == CV_32F
|| depth == CV_64F)) in unknown function, file
C:\slace\builds\WinInstallerMegaPack\src\opencv\modules\core\src\matmul.cpp,
line 1926
which lead me to trying getHomography leading to THIS error instead:
OpenCV Error: Assertion failed (npoints >= 0 && points2.checkVector(2)
== npoints && points1.type()) in unknown function, file C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\calib3d\src\fundam.cpp,
line 1074
(I checked - npoints IS greater than zero, because it equals points1.checkVector(2) - it's failing because points1.checkVector(2) and points2.checkVector(2) don't match - but I don't understand what checkVector(2) does. points1 and points2 are taken from the coordinates I tried feeding getHomography - which were the same coordinates as above.)
Any idea how to get the output I'm looking for? I've been left confused for a while now. :/
^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^
UPDATE 1:
Ok, so I've figured out how it's supposed to calculate things.
getPerspectiveTransform is supposed to find a 3x3 matrix such that:
|M00 M10 M20| |X| |c*X'|
|M01 M11 M21| * |Y| = |c*Y'|
|M02 M12 M22| |1| | c |
Where MXX is the constant matrix, X & Y are the input coordinates, and X' & Y' are the output coordinates. This has to work for all four input/output coordinate combos. (The idea is simple, even if the math to get there may not be - I'm still not sure how they're supposed to get that matrix... input here would be appreciated - since I only need one actual coordinate I would not mind just bypassing getPerspectiveTransform and WarpPerspective entirely and just using a mathematical solution.)
After you've gotten the perspective transform matrix, WarpPerspective basically just moves each pixel's coordinates by multiplying:
|M00 M10 M20| |X|
|M01 M11 M21| * |Y|
|M02 M12 M22| |1|
for each coordinate. Then dividing cX' & cY' both by c (obviously). Then some averaging needs to be done since the results are unlikely to be perfect integers.
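If it helps, here is a tiny self-contained check of that multiply-then-divide step (plain C++; the matrix below is just an identity placeholder, to be replaced by whatever getPerspectiveTransform printed, read in OpenCV's row-major convention):

#include <cstdio>

int main() {
    // Paste the 3x3 matrix printed by getPerspectiveTransform here (row-major,
    // OpenCV convention). The identity below is only a placeholder.
    double M[3][3] = { { 1.0, 0.0, 0.0 },
                       { 0.0, 1.0, 0.0 },
                       { 0.0, 0.0, 1.0 } };
    double X = 186.0, Y = 87.0;                  // one of the source corners

    double cX = M[0][0] * X + M[0][1] * Y + M[0][2];
    double cY = M[1][0] * X + M[1][1] * Y + M[1][2];
    double c  = M[2][0] * X + M[2][1] * Y + M[2][2];

    // After the perspective divide this should land on the matching destination corner.
    std::printf("mapped point: (%f, %f)\n", cX / c, cY / c);
    return 0;
}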
Ok, the basic idea is easy enough but here's the problem; getPerspectiveTransform does not seem to be working!
In my above example I modified the program to print out the matrix it was getting from getPerspectiveTransform. It gave me this:
|1.559647 0.043635 -37.808761|
|0.305521 1.174385 -50.688854|
|0.000915 0.000132 1.000000|
In my above example, I gave for the upper-left coordinate 186, 87 to be moved to 176, 96. Unfortunately when I multiply the above warp_matrix with the input coordinate (186, 87) I get not 176, 96 - but 217, 92! Which is the same result WarpPerspective gets. So at least we know WarpPerspective is working...
Um, Alex...
Your X & Y coordinates are reversed on both your Source and Destination coordinates.
In other words:
sourceCoords[0].x = UpperLeft.X;
sourceCoords[0].y = UpperLeft.Y;
should be
sourceCoords[0].x = UpperLeft.Y;
sourceCoords[0].y = UpperLeft.X;
and
destinationCoords[0].y = top;
destinationCoords[0].x = left;
should be
destinationCoords[0].y = left;
destinationCoords[0].x = top;
Jus' thought you'd like to know.
I've made a QGraphicsScene with a mouseClickEvent that lets the user create blue squares inside of it. But I want to make the scene grow when an item is placed against its border so that the user never runs out of space on the graphics scene.
What's the best way to make a graphics scene bigger in this case?
I suggest doing something like the following:
Get the bounding rect of all items in the scene using QGraphicsScene::itemsBoundingRect().
Add some padding around that rect to make sure the bounds of the items won't hit the edge of the view. Something like myRect.adjust(-20, -20, 20, 20) should be sufficient.
Use QGraphicsView::fitInView(myRect, Qt::KeepAspectRatio) to ensure the occupied area is within the visible bounds of the view.
That should do it. This code should be called whenever something has changed in the scene. You can use the QRectF::intersects() function to find out whether the new rect has been placed on the edge of the view (see the sketch below).
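A minimal sketch of that approach, assuming you have pointers named scene and view (the function name here is just for illustration):

#include <QGraphicsScene>
#include <QGraphicsView>
#include <QRectF>

// Call this whenever an item has been added or moved near the border.
void ensureItemsVisible(QGraphicsScene* scene, QGraphicsView* view)
{
    QRectF bounds = scene->itemsBoundingRect();    // area actually occupied by items
    bounds.adjust(-20, -20, 20, 20);               // padding so items don't touch the edge
    view->fitInView(bounds, Qt::KeepAspectRatio);  // keep everything visible
}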
What's the best way to make a graphics scene bigger in this case?
The GraphicsScene is an infinite coordinate system. Most clients will use itemsBoundingRect() to get an idea how much space is actually used by items in the scene. If you have cleared the scene, you might want to call QGraphicsScene::setSceneRect(QRectF()) to "make it smaller" again.
Hope that helps.
Sorry if this is a little bit late (6 years), but I will provide an answer in case someone is still struggling with this or wants another approach. I implement this in mouseReleaseEvent in a custom class derived from QGraphicsObject. Note that I initialize the size of my QGraphicsScene to (1000, 1000) with scene->setSceneRect(0, 0, 1000, 1000). Here is what my code does: if the item (which is draggable) is placed against a border, that border will grow. So here is my code:
void MyItem::mouseReleaseEvent(QGraphicsSceneMouseEvent* event){
    QRectF tempRect = this->scene()->sceneRect();
    if(this->scenePos().y() < this->scene()->sceneRect().top()){
        tempRect.adjust(0,-200,0,0);
        if(this->scenePos().x() < this->scene()->sceneRect().left()){
            tempRect.adjust(-200,0,0,0);
        }
        else if(this->scenePos().x() + 200 > this->scene()->sceneRect().right()){
            tempRect.adjust(0,0,200,0);
        }
    }
    else if(this->scenePos().y() + 200 > this->scene()->sceneRect().bottom()){
        tempRect.adjust(0,0,0,200);
        if(this->scenePos().x() < this->scene()->sceneRect().left()){
            tempRect.adjust(-200,0,0,0);
        }
        else if(this->scenePos().x() + 200 > this->scene()->sceneRect().right()){
            tempRect.adjust(0,0,200,0);
        }
    }
    else if(this->scenePos().x() < this->scene()->sceneRect().left()){
        tempRect.adjust(-200,0,0,0);
        if(this->scenePos().y() < this->scene()->sceneRect().top()){
            tempRect.adjust(0,-200,0,0);
        }
        else if(this->scenePos().y() + 200 > this->scene()->sceneRect().bottom()){
            tempRect.adjust(0,0,0,200);
        }
    }
    else if(this->scenePos().x() + 200 > this->scene()->sceneRect().right()){
        tempRect.adjust(0,0,200,0);
        if(this->scenePos().y() < this->scene()->sceneRect().top()){
            tempRect.adjust(0,-200,0,0);
        }
        else if(this->scenePos().y() + 200 > this->scene()->sceneRect().bottom()){
            tempRect.adjust(0,0,0,200);
        }
    }
    this->scene()->setSceneRect(tempRect);
}
I know it's late, but for anyone looking for Python code, here it is:
class Scene(QtWidgets.QGraphicsScene):
    def __init__(self):
        super(Scene, self).__init__()
        self.setSceneRect(0, 0, 2000, 2000)
        self.sceneRect().adjust(-20, -20, 20, 20)
        self.old_rect = self.itemsBoundingRect()

    def adjust(self):
        w = self.sceneRect().width()
        h = self.sceneRect().height()
        x = self.sceneRect().x()
        y = self.sceneRect().y()
        adjust_factor = 500
        adjust_factor2 = 300
        smaller = self.is_smaller()
        self.old_rect = self.itemsBoundingRect()

        if not self.sceneRect().contains(self.old_rect):
            self.setSceneRect(-adjust_factor + x, -adjust_factor + y, adjust_factor + w, adjust_factor + h)

        if smaller:
            self.setSceneRect(adjust_factor2 + x, adjust_factor2 + y, abs(adjust_factor2 - w), abs(adjust_factor2 - h))

    def is_smaller(self):
        x = self.old_rect.x()
        y = self.old_rect.y()
        h = self.old_rect.height()
        w = self.old_rect.width()

        if ((x <= self.itemsBoundingRect().x()) and (y <= self.itemsBoundingRect().y())
                and (h > self.itemsBoundingRect().height()) and (w > self.itemsBoundingRect().width())):
            return True
        return False
Explanation:
Use self.sceneRect().contains(self.itemsBoundingRect()) to check whether the items' bounding rect is within the sceneRect; if it is not, then use self.setSceneRect() to increase the sceneRect size
(Note: make sure you add to the previous sceneRect, as shown in the above code).
If you also want to decrease the sceneRect, store the old itemsBoundingRect and compare it with the new one; if the new rectangle is smaller, then decrease the size by some factor (refer to the above code).
Usage:
You may call the adjust method from anywhere you like, but calling it from mouseReleaseEvent worked best for me.
If you have any suggestions or queries, you may comment.